From patchwork Sat Feb 10 00:23:10 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 199162
Date: Fri, 9 Feb 2024 16:23:10 -0800
In-Reply-To: <20240210002328.4126422-1-jstultz@google.com>
References: <20240210002328.4126422-1-jstultz@google.com>
Message-ID: <20240210002328.4126422-2-jstultz@google.com>
Subject: [PATCH v8 1/7] locking/mutex: Remove wakeups from under mutex::wait_lock
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>

In preparation to nest mutex::wait_lock under rq::lock we need to
remove wakeups from under it.
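To illustrate the pattern being applied throughout this change: rather
than calling wake_up_process() while holding wait_lock, wakeups are
queued on an on-stack wake_q and issued only after the lock is dropped.
A minimal sketch (the helper function is hypothetical, not from this
patch; the wake_q API is the kernel's own):

    #include <linux/sched.h>
    #include <linux/sched/wake_q.h>
    #include <linux/spinlock.h>

    /*
     * Hypothetical helper showing the shape of the change: queue the
     * wakeup while holding wait_lock, perform it after unlocking.
     */
    static void example_wake_after_unlock(raw_spinlock_t *wait_lock,
                                          struct task_struct *waiter)
    {
            DEFINE_WAKE_Q(wake_q);  /* the wake_q must live on the stack */

            raw_spin_lock(wait_lock);
            /* Decide who to wake under the lock, but only queue the task: */
            wake_q_add(&wake_q, waiter);
            raw_spin_unlock(wait_lock);

            /* Issue the deferred wakeup only once wait_lock is dropped. */
            wake_up_q(&wake_q);
    }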
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: kernel-team@android.com Signed-off-by: Peter Zijlstra (Intel) [Heavily changed after 55f036ca7e74 ("locking: WW mutex cleanup") and 08295b3b5bee ("locking: Implement an algorithm choice for Wound-Wait mutexes")] Signed-off-by: Juri Lelli [jstultz: rebased to mainline, added extra wake_up_q & init to avoid hangs, similar to Connor's rework of this patch] Signed-off-by: John Stultz --- v5: * Reverted back to an earlier version of this patch to undo the change that kept the wake_q in the ctx structure, as that broke the rule that the wake_q must always be on the stack, as its not safe for concurrency. v6: * Made tweaks suggested by Waiman Long v7: * Fixups to pass wake_qs down for PREEMPT_RT logic --- kernel/locking/mutex.c | 17 +++++++++++++---- kernel/locking/rtmutex.c | 26 +++++++++++++++++--------- kernel/locking/rwbase_rt.c | 4 +++- kernel/locking/rwsem.c | 4 ++-- kernel/locking/spinlock_rt.c | 3 ++- kernel/locking/ww_mutex.h | 29 ++++++++++++++++++----------- 6 files changed, 55 insertions(+), 28 deletions(-) diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index cbae8c0b89ab..980ce630232c 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -575,6 +575,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas struct lockdep_map *nest_lock, unsigned long ip, struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx) { + DEFINE_WAKE_Q(wake_q); struct mutex_waiter waiter; struct ww_mutex *ww; int ret; @@ -625,7 +626,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas */ if (__mutex_trylock(lock)) { if (ww_ctx) - __ww_mutex_check_waiters(lock, ww_ctx); + __ww_mutex_check_waiters(lock, ww_ctx, &wake_q); goto skip_wait; } @@ -645,7 +646,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas * Add in stamp order, waking up waiters that must kill * themselves. 
 		 */
-		ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);
+		ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx, &wake_q);
 		if (ret)
 			goto err_early_kill;
 	}
@@ -681,6 +682,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		}

 		raw_spin_unlock(&lock->wait_lock);
+		/* Make sure we do wakeups before calling schedule */
+		if (!wake_q_empty(&wake_q)) {
+			wake_up_q(&wake_q);
+			wake_q_init(&wake_q);
+		}
 		schedule_preempt_disabled();

 		first = __mutex_waiter_is_first(lock, &waiter);
@@ -714,7 +720,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		 */
 		if (!ww_ctx->is_wait_die &&
 		    !__mutex_waiter_is_first(lock, &waiter))
-			__ww_mutex_check_waiters(lock, ww_ctx);
+			__ww_mutex_check_waiters(lock, ww_ctx, &wake_q);
 	}

 	__mutex_remove_waiter(lock, &waiter);
@@ -730,6 +736,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		ww_mutex_lock_acquired(ww, ww_ctx);

 	raw_spin_unlock(&lock->wait_lock);
+	wake_up_q(&wake_q);
 	preempt_enable();
 	return 0;

@@ -741,6 +748,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	raw_spin_unlock(&lock->wait_lock);
 	debug_mutex_free_waiter(&waiter);
 	mutex_release(&lock->dep_map, ip);
+	wake_up_q(&wake_q);
 	preempt_enable();
 	return ret;
 }
@@ -934,6 +942,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		}
 	}

+	preempt_disable();
 	raw_spin_lock(&lock->wait_lock);
 	debug_mutex_unlock(lock);
 	if (!list_empty(&lock->wait_list)) {
@@ -952,8 +961,8 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		__mutex_handoff(lock, next);

 	raw_spin_unlock(&lock->wait_lock);
-
 	wake_up_q(&wake_q);
+	preempt_enable();
 }

 #ifndef CONFIG_DEBUG_LOCK_ALLOC
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 4a10e8c16fd2..eaac8b196a69 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -34,13 +34,15 @@

 static inline int __ww_mutex_add_waiter(struct rt_mutex_waiter *waiter,
 					struct rt_mutex *lock,
-					struct ww_acquire_ctx *ww_ctx)
+					struct ww_acquire_ctx *ww_ctx,
+					struct wake_q_head *wake_q)
 {
 	return 0;
 }

 static inline void __ww_mutex_check_waiters(struct rt_mutex *lock,
-					    struct ww_acquire_ctx *ww_ctx)
+					    struct ww_acquire_ctx *ww_ctx,
+					    struct wake_q_head *wake_q)
 {
 }

@@ -1206,6 +1208,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,
 	struct rt_mutex_waiter *top_waiter = waiter;
 	struct rt_mutex_base *next_lock;
 	int chain_walk = 0, res;
+	DEFINE_WAKE_Q(wake_q);

 	lockdep_assert_held(&lock->wait_lock);

@@ -1244,7 +1247,8 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,

 		/* Check whether the waiter should back out immediately */
 		rtm = container_of(lock, struct rt_mutex, rtmutex);
-		res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx);
+		res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx, &wake_q);
+		wake_up_q(&wake_q);
 		if (res) {
 			raw_spin_lock(&task->pi_lock);
 			rt_mutex_dequeue(lock, waiter);
@@ -1677,7 +1681,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 				       struct ww_acquire_ctx *ww_ctx,
 				       unsigned int state,
 				       enum rtmutex_chainwalk chwalk,
-				       struct rt_mutex_waiter *waiter)
+				       struct rt_mutex_waiter *waiter,
+				       struct wake_q_head *wake_q)
 {
 	struct rt_mutex *rtm = container_of(lock, struct rt_mutex, rtmutex);
 	struct ww_mutex *ww = ww_container_of(rtm);
@@ -1688,7 +1693,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	/* Try to acquire the lock again: */
 	if (try_to_take_rt_mutex(lock, current, NULL)) {
 		if (build_ww_mutex() && ww_ctx) {
-			__ww_mutex_check_waiters(rtm, ww_ctx);
+			__ww_mutex_check_waiters(rtm, ww_ctx, wake_q);
 			ww_mutex_lock_acquired(ww, ww_ctx);
 		}
 		return 0;
@@ -1706,7 +1711,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 		/* acquired the lock */
 		if (build_ww_mutex() && ww_ctx) {
 			if (!ww_ctx->is_wait_die)
-				__ww_mutex_check_waiters(rtm, ww_ctx);
+				__ww_mutex_check_waiters(rtm, ww_ctx, wake_q);
 			ww_mutex_lock_acquired(ww, ww_ctx);
 		}
 	} else {
@@ -1728,7 +1733,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,

 static inline int __rt_mutex_slowlock_locked(struct rt_mutex_base *lock,
 					     struct ww_acquire_ctx *ww_ctx,
-					     unsigned int state)
+					     unsigned int state,
+					     struct wake_q_head *wake_q)
 {
 	struct rt_mutex_waiter waiter;
 	int ret;
@@ -1737,7 +1743,7 @@ static inline int __rt_mutex_slowlock_locked(struct rt_mutex_base *lock,
 	waiter.ww_ctx = ww_ctx;

 	ret = __rt_mutex_slowlock(lock, ww_ctx, state, RT_MUTEX_MIN_CHAINWALK,
-				  &waiter);
+				  &waiter, wake_q);

 	debug_rt_mutex_free_waiter(&waiter);
 	return ret;
@@ -1753,6 +1759,7 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 				     struct ww_acquire_ctx *ww_ctx,
 				     unsigned int state)
 {
+	DEFINE_WAKE_Q(wake_q);
 	unsigned long flags;
 	int ret;

@@ -1774,8 +1781,9 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 	 * irqsave/restore variants.
 	 */
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
-	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state);
+	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state, &wake_q);
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+	wake_up_q(&wake_q);
 	rt_mutex_post_schedule();

 	return ret;
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 34a59569db6b..e9d2f38b70f3 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -69,6 +69,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
 				      unsigned int state)
 {
 	struct rt_mutex_base *rtm = &rwb->rtmutex;
+	DEFINE_WAKE_Q(wake_q);
 	int ret;

 	rwbase_pre_schedule();
@@ -110,7 +111,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
 	 * For rwlocks this returns 0 unconditionally, so the below
 	 * !ret conditionals are optimized out.
 	 */
-	ret = rwbase_rtmutex_slowlock_locked(rtm, state);
+	ret = rwbase_rtmutex_slowlock_locked(rtm, state, &wake_q);

 	/*
 	 * On success the rtmutex is held, so there can't be a writer
@@ -122,6 +123,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
 	if (!ret)
 		atomic_inc(&rwb->readers);
 	raw_spin_unlock_irq(&rtm->wait_lock);
+	wake_up_q(&wake_q);
 	if (!ret)
 		rwbase_rtmutex_unlock(rtm);

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 2340b6d90ec6..74ebb2915d63 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1415,8 +1415,8 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
 #define rwbase_rtmutex_lock_state(rtm, state)		\
 	__rt_mutex_lock(rtm, state)

-#define rwbase_rtmutex_slowlock_locked(rtm, state)	\
-	__rt_mutex_slowlock_locked(rtm, NULL, state)
+#define rwbase_rtmutex_slowlock_locked(rtm, state, wq)	\
+	__rt_mutex_slowlock_locked(rtm, NULL, state, wq)

 #define rwbase_rtmutex_unlock(rtm)			\
 	__rt_mutex_unlock(rtm)
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index 38e292454fcc..fb1810a14c9d 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -162,7 +162,8 @@ rwbase_rtmutex_lock_state(struct rt_mutex_base *rtm, unsigned int state)
 }

 static __always_inline int
-rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state)
+rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state,
+			       struct wake_q_head *wake_q)
 {
 	rtlock_slowlock_locked(rtm);
 	return 0;
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 3ad2cc4823e5..7189c6631d90 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -275,7 +275,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
  */
 static bool
 __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
-	       struct ww_acquire_ctx *ww_ctx)
+	       struct ww_acquire_ctx *ww_ctx, struct wake_q_head *wake_q)
 {
 	if (!ww_ctx->is_wait_die)
 		return false;
@@ -284,7 +284,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 #ifndef WW_RT
 		debug_mutex_wake_waiter(lock, waiter);
 #endif
-		wake_up_process(waiter->task);
+		wake_q_add(wake_q, waiter->task);
 	}

 	return true;
@@ -299,7 +299,8 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
  */
 static bool __ww_mutex_wound(struct MUTEX *lock,
 			     struct ww_acquire_ctx *ww_ctx,
-			     struct ww_acquire_ctx *hold_ctx)
+			     struct ww_acquire_ctx *hold_ctx,
+			     struct wake_q_head *wake_q)
 {
 	struct task_struct *owner = __ww_mutex_owner(lock);

@@ -331,7 +332,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 		 * wakeup pending to re-read the wounded state.
 		 */
 		if (owner != current)
-			wake_up_process(owner);
+			wake_q_add(wake_q, owner);

 		return true;
 	}
@@ -352,7 +353,8 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 * The current task must not be on the wait list.
 */
 static void
-__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
+__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx,
+			 struct wake_q_head *wake_q)
 {
 	struct MUTEX_WAITER *cur;

@@ -364,8 +366,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
 		if (!cur->ww_ctx)
 			continue;

-		if (__ww_mutex_die(lock, cur, ww_ctx) ||
-		    __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx))
+		if (__ww_mutex_die(lock, cur, ww_ctx, wake_q) ||
+		    __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx, wake_q))
 			break;
 	}
 }
@@ -377,6 +379,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
 static __always_inline void
 ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
+	DEFINE_WAKE_Q(wake_q);
+
 	ww_mutex_lock_acquired(lock, ctx);

 	/*
@@ -405,8 +409,10 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 	 * die or wound us.
 	 */
 	lock_wait_lock(&lock->base);
-	__ww_mutex_check_waiters(&lock->base, ctx);
+	__ww_mutex_check_waiters(&lock->base, ctx, &wake_q);
 	unlock_wait_lock(&lock->base);
+
+	wake_up_q(&wake_q);
 }

 static __always_inline int
@@ -488,7 +494,8 @@ __ww_mutex_check_kill(struct MUTEX *lock, struct MUTEX_WAITER *waiter,

 static inline int
 __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
 		      struct MUTEX *lock,
-		      struct ww_acquire_ctx *ww_ctx)
+		      struct ww_acquire_ctx *ww_ctx,
+		      struct wake_q_head *wake_q)
 {
 	struct MUTEX_WAITER *cur, *pos = NULL;
 	bool is_wait_die;
@@ -532,7 +539,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
 			pos = cur;

 		/* Wait-Die: ensure younger waiters die. */
-		__ww_mutex_die(lock, cur, ww_ctx);
+		__ww_mutex_die(lock, cur, ww_ctx, wake_q);
 	}

 	__ww_waiter_add(lock, waiter, pos);
@@ -550,7 +557,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
 	 * such that either we or the fastpath will wound @ww->ctx.
 	 */
 	smp_mb();
-	__ww_mutex_wound(lock, ww_ctx, ww->ctx);
+	__ww_mutex_wound(lock, ww_ctx, ww->ctx, wake_q);
 	}

 	return 0;

From patchwork Sat Feb 10 00:23:11 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 199168
Date: Fri, 9 Feb 2024 16:23:11 -0800
In-Reply-To: <20240210002328.4126422-1-jstultz@google.com>
References: <20240210002328.4126422-1-jstultz@google.com>
Message-ID: <20240210002328.4126422-3-jstultz@google.com>
Subject: [PATCH v8 2/7] locking/mutex: Make mutex::wait_lock irq safe
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>

From: Juri Lelli

mutex::wait_lock might be nested under rq->lock. Make it irq safe then.
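The conversion follows the standard irqsave pattern; a minimal sketch
(hypothetical function, not from the patch itself):

    #include <linux/spinlock.h>

    /* Hypothetical critical section showing the irqsave conversion. */
    static void example_critical_section(raw_spinlock_t *wait_lock)
    {
            unsigned long flags;

            /* was: raw_spin_lock(wait_lock); */
            raw_spin_lock_irqsave(wait_lock, flags);    /* disables local IRQs */
            /* ... touch wait-list state that may nest under rq->lock ... */
            raw_spin_unlock_irqrestore(wait_lock, flags);   /* restores IRQ state */
    }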
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Juri Lelli
Signed-off-by: Peter Zijlstra (Intel)
[rebase & fix {un,}lock_wait_lock helpers in ww_mutex.h]
Signed-off-by: Connor O'Brien
Signed-off-by: John Stultz
---
v3:
* Re-added this patch after it was dropped in v2 which caused
  lockdep warnings to trip.
v7:
* Fix function definition for PREEMPT_RT case, as pointed out
  by Metin Kaya.
* Fix incorrect flags handling in PREEMPT_RT case as found by
  Metin Kaya
---
 kernel/locking/mutex.c    | 18 ++++++++++--------
 kernel/locking/ww_mutex.h | 22 +++++++++++-----------
 2 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 980ce630232c..7de72c610c65 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -578,6 +578,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	DEFINE_WAKE_Q(wake_q);
 	struct mutex_waiter waiter;
 	struct ww_mutex *ww;
+	unsigned long flags;
 	int ret;

 	if (!use_ww_ctx)
@@ -620,7 +621,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		return 0;
 	}

-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	/*
 	 * After waiting to acquire the wait_lock, try again.
 	 */
@@ -681,7 +682,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 			goto err;
 		}

-		raw_spin_unlock(&lock->wait_lock);
+		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 		/* Make sure we do wakeups before calling schedule */
 		if (!wake_q_empty(&wake_q)) {
 			wake_up_q(&wake_q);
@@ -707,9 +708,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 			trace_contention_begin(lock, LCB_F_MUTEX);
 		}

-		raw_spin_lock(&lock->wait_lock);
+		raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	}
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 acquired:
 	__set_current_state(TASK_RUNNING);

@@ -735,7 +736,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	if (ww_ctx)
 		ww_mutex_lock_acquired(ww, ww_ctx);

-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 	wake_up_q(&wake_q);
 	preempt_enable();
 	return 0;
@@ -745,7 +746,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	__mutex_remove_waiter(lock, &waiter);
 err_early_kill:
 	trace_contention_end(lock, ret);
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 	debug_mutex_free_waiter(&waiter);
 	mutex_release(&lock->dep_map, ip);
 	wake_up_q(&wake_q);
@@ -916,6 +917,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 	struct task_struct *next = NULL;
 	DEFINE_WAKE_Q(wake_q);
 	unsigned long owner;
+	unsigned long flags;

 	mutex_release(&lock->dep_map, ip);

@@ -943,7 +945,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 	}

 	preempt_disable();
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	debug_mutex_unlock(lock);
 	if (!list_empty(&lock->wait_list)) {
 		/* get the first entry from the wait-list: */
@@ -960,7 +962,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 	if (owner & MUTEX_FLAG_HANDOFF)
 		__mutex_handoff(lock, next);

-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 	wake_up_q(&wake_q);
 	preempt_enable();
 }
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 7189c6631d90..9facc0ddfdd3 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -70,14 +70,14 @@ __ww_mutex_has_waiters(struct mutex *lock)
 	return atomic_long_read(&lock->owner) & MUTEX_FLAG_WAITERS;
 }

-static inline void lock_wait_lock(struct mutex *lock)
+static inline void lock_wait_lock(struct mutex *lock, unsigned long *flags)
 {
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, *flags);
 }

-static inline void unlock_wait_lock(struct mutex *lock)
+static inline void unlock_wait_lock(struct mutex *lock, unsigned long *flags)
 {
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, *flags);
 }

 static inline void lockdep_assert_wait_lock_held(struct mutex *lock)
@@ -144,14 +144,14 @@ __ww_mutex_has_waiters(struct rt_mutex *lock)
 	return rt_mutex_has_waiters(&lock->rtmutex);
 }

-static inline void lock_wait_lock(struct rt_mutex *lock)
+static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
 {
-	raw_spin_lock(&lock->rtmutex.wait_lock);
+	raw_spin_lock_irqsave(&lock->rtmutex.wait_lock, *flags);
 }

-static inline void unlock_wait_lock(struct rt_mutex *lock)
+static inline void unlock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
 {
-	raw_spin_unlock(&lock->rtmutex.wait_lock);
+	raw_spin_unlock_irqrestore(&lock->rtmutex.wait_lock, *flags);
 }

 static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock)
@@ -380,6 +380,7 @@ static __always_inline void
 ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
 	DEFINE_WAKE_Q(wake_q);
+	unsigned long flags;

 	ww_mutex_lock_acquired(lock, ctx);

@@ -408,10 +409,9 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 	 * Uh oh, we raced in fastpath, check if any of the waiters need to
 	 * die or wound us.
 	 */
-	lock_wait_lock(&lock->base);
+	lock_wait_lock(&lock->base, &flags);
 	__ww_mutex_check_waiters(&lock->base, ctx, &wake_q);
-	unlock_wait_lock(&lock->base);
-
+	unlock_wait_lock(&lock->base, &flags);
 	wake_up_q(&wake_q);
 }

From patchwork Sat Feb 10 00:23:12 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 199163
Date: Fri, 9 Feb 2024 16:23:12 -0800
In-Reply-To: <20240210002328.4126422-1-jstultz@google.com>
References: <20240210002328.4126422-1-jstultz@google.com>
Message-ID: <20240210002328.4126422-4-jstultz@google.com>
Subject: [PATCH v8 3/7] locking/mutex: Expose __mutex_owner()
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>

From: Juri Lelli

Implementing proxy execution requires that scheduler code be able to
identify the current owner of a mutex. Expose __mutex_owner() for this
purpose (alone!).
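For context, a sketch of the kind of scheduler-side lookup this
exposure permits (a hypothetical helper, not part of this patch;
it assumes the kernel-internal kernel/locking/mutex.h header is in
scope, per the move below):

    #include <linux/mutex.h>
    #include <linux/sched.h>

    /*
     * Hypothetical sketch: map a contended mutex to its current owner,
     * e.g. so scheduler code could pick that task to run as a proxy.
     */
    static struct task_struct *example_owner_of(struct mutex *m)
    {
            /*
             * __mutex_owner() masks the MUTEX_FLAGS bits out of
             * mutex::owner and returns the owning task, or NULL if
             * the mutex is unlocked.
             */
            return m ? __mutex_owner(m) : NULL;
    }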
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Juri Lelli
[Removed the EXPORT_SYMBOL]
Signed-off-by: Valentin Schneider
Signed-off-by: Connor O'Brien
[jstultz: Reworked per Peter's suggestions]
Signed-off-by: John Stultz
---
v4:
* Move __mutex_owner() to kernel/locking/mutex.h instead of adding
  a new globally available accessor function to keep the exposure
  of this low, along with keeping it an inline function, as
  suggested by PeterZ
---
 kernel/locking/mutex.c | 25 -------------------------
 kernel/locking/mutex.h | 25 +++++++++++++++++++++++++
 2 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 7de72c610c65..5741641be914 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -56,31 +56,6 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
 }
 EXPORT_SYMBOL(__mutex_init);

-/*
- * @owner: contains: 'struct task_struct *' to the current lock owner,
- * NULL means not owned. Since task_struct pointers are aligned at
- * at least L1_CACHE_BYTES, we have low bits to store extra state.
- *
- * Bit0 indicates a non-empty waiter list; unlock must issue a wakeup.
- * Bit1 indicates unlock needs to hand the lock to the top-waiter
- * Bit2 indicates handoff has been done and we're waiting for pickup.
- */
-#define MUTEX_FLAG_WAITERS	0x01
-#define MUTEX_FLAG_HANDOFF	0x02
-#define MUTEX_FLAG_PICKUP	0x04
-
-#define MUTEX_FLAGS		0x07
-
-/*
- * Internal helper function; C doesn't allow us to hide it :/
- *
- * DO NOT USE (outside of mutex code).
- */
-static inline struct task_struct *__mutex_owner(struct mutex *lock)
-{
-	return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLAGS);
-}
-
 static inline struct task_struct *__owner_task(unsigned long owner)
 {
 	return (struct task_struct *)(owner & ~MUTEX_FLAGS);
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 0b2a79c4013b..1c7d3d32def8 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -20,6 +20,31 @@ struct mutex_waiter {
 #endif
 };

+/*
+ * @owner: contains: 'struct task_struct *' to the current lock owner,
+ * NULL means not owned. Since task_struct pointers are aligned at
+ * at least L1_CACHE_BYTES, we have low bits to store extra state.
+ *
+ * Bit0 indicates a non-empty waiter list; unlock must issue a wakeup.
+ * Bit1 indicates unlock needs to hand the lock to the top-waiter
+ * Bit2 indicates handoff has been done and we're waiting for pickup.
+ */
+#define MUTEX_FLAG_WAITERS	0x01
+#define MUTEX_FLAG_HANDOFF	0x02
+#define MUTEX_FLAG_PICKUP	0x04
+
+#define MUTEX_FLAGS		0x07
+
+/*
+ * Internal helper function; C doesn't allow us to hide it :/
+ *
+ * DO NOT USE (outside of mutex & scheduler code).
+ */
+static inline struct task_struct *__mutex_owner(struct mutex *lock)
+{
+	return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLAGS);
+}
+
 #ifdef CONFIG_DEBUG_MUTEXES
 extern void debug_mutex_lock_common(struct mutex *lock,
 				    struct mutex_waiter *waiter);

From patchwork Sat Feb 10 00:23:13 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 199165
Date: Fri, 9 Feb 2024 16:23:13 -0800
In-Reply-To: <20240210002328.4126422-1-jstultz@google.com>
References: <20240210002328.4126422-1-jstultz@google.com>
Message-ID: <20240210002328.4126422-5-jstultz@google.com>
Subject: [PATCH v8 4/7] sched: Add do_push_task helper
From: John Stultz <jstultz@google.com>
To: LKML <linux-kernel@vger.kernel.org>

From: Connor O'Brien

Switch logic that deactivates, sets the task cpu, and reactivates a
task on a different rq to use a helper that will be later extended to
push entire blocked task chains.

This patch was broken out from a larger chain migration patch
originally by Connor O'Brien.
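For illustration, a hypothetical caller of the new helper (not from
this patch; both runqueue locks are assumed held, as in the existing
push_rt_task()/push_dl_task() paths being converted):

    /*
     * Hypothetical caller sketch: migrate @p from @src_rq to @dst_rq
     * with the new helper, then have the destination CPU reschedule.
     */
    static void example_push(struct rq *src_rq, struct rq *dst_rq,
                             struct task_struct *p)
    {
            do_push_task(src_rq, dst_rq, p); /* deactivate + set_task_cpu + activate */
            resched_curr(dst_rq);            /* let the destination CPU notice @p */
    }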
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: kernel-team@android.com Signed-off-by: Connor O'Brien [jstultz: split out from larger chain migration patch] Signed-off-by: John Stultz --- v8: * Renamed from push_task_chain to do_push_task so it makes more sense without proxy-execution --- kernel/sched/core.c | 4 +--- kernel/sched/deadline.c | 8 ++------ kernel/sched/rt.c | 8 ++------ kernel/sched/sched.h | 9 +++++++++ 4 files changed, 14 insertions(+), 15 deletions(-) diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 9116bcc90346..ad4748327651 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -2714,9 +2714,7 @@ int push_cpu_stop(void *arg) // XXX validate p is still the highest prio task if (task_rq(p) == rq) { - deactivate_task(rq, p, 0); - set_task_cpu(p, lowest_rq->cpu); - activate_task(lowest_rq, p, 0); + do_push_task(rq, lowest_rq, p); resched_curr(lowest_rq); } diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c index a04a436af8cc..e68d88963e89 100644 --- a/kernel/sched/deadline.c +++ b/kernel/sched/deadline.c @@ -2443,9 +2443,7 @@ static int push_dl_task(struct rq *rq) goto retry; } - deactivate_task(rq, next_task, 0); - set_task_cpu(next_task, later_rq->cpu); - activate_task(later_rq, next_task, 0); + do_push_task(rq, later_rq, next_task); ret = 1; resched_curr(later_rq); @@ -2531,9 +2529,7 @@ static void pull_dl_task(struct rq *this_rq) if (is_migration_disabled(p)) { push_task = get_push_task(src_rq); } else { - deactivate_task(src_rq, p, 0); - set_task_cpu(p, this_cpu); - activate_task(this_rq, p, 0); + do_push_task(src_rq, this_rq, p); dmin = p->dl.deadline; resched = true; } diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c index 3261b067b67e..dd072d11cc02 100644 --- a/kernel/sched/rt.c +++ b/kernel/sched/rt.c @@ -2106,9 +2106,7 @@ static int push_rt_task(struct rq *rq, bool pull) goto retry; } - deactivate_task(rq, next_task, 0); - set_task_cpu(next_task, lowest_rq->cpu); - activate_task(lowest_rq, next_task, 0); + do_push_task(rq, lowest_rq, next_task); resched_curr(lowest_rq); ret = 1; @@ -2379,9 +2377,7 @@ static void pull_rt_task(struct rq *this_rq) if (is_migration_disabled(p)) { push_task = get_push_task(src_rq); } else { - deactivate_task(src_rq, p, 0); - set_task_cpu(p, this_cpu); - activate_task(this_rq, p, 0); + do_push_task(src_rq, this_rq, p); resched = true; } /* diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 001fe047bd5d..6ca83837e0f4 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -3472,5 +3472,14 @@ static inline void init_sched_mm_cid(struct task_struct *t) { } extern u64 avg_vruntime(struct cfs_rq *cfs_rq); extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se); +#ifdef CONFIG_SMP +static inline +void do_push_task(struct rq *rq, struct rq *dst_rq, struct task_struct *task) +{ + deactivate_task(rq, task, 0); + set_task_cpu(task, dst_rq->cpu); + activate_task(dst_rq, task, 0); +} +#endif #endif /* _KERNEL_SCHED_SCHED_H */ From patchwork Sat Feb 10 00:23:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: John Stultz X-Patchwork-Id: 199166 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:50ea:b0:106:860b:bbdd with SMTP id r10csp1214793dyd; Fri, 9 Feb 2024 16:26:26 -0800 (PST) X-Forwarded-Encrypted: i=3; AJvYcCXmNUyjIyPUCVOM5pe9ozmVNu3ZkqQLWSGVwAAJMFKujCX1qt2gUsJcjhGVL5v3TmaIYZYeZmmp2Mqc3cLzrTg3yOCeIw== X-Google-Smtp-Source: 
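
For context on where this helper is headed: a rough sketch of how
do_push_task() might later be extended to migrate an entire blocked
task chain, as the commit message anticipates. The blocked_donor
linkage below is hypothetical and not part of this patch; it only
illustrates why centralizing the three-step migration sequence pays
off:

static void do_push_task_chain(struct rq *rq, struct rq *dst_rq,
			       struct task_struct *task)
{
	/*
	 * Walk from the pushed task through its (hypothetical) chain of
	 * donor tasks, migrating each one with the same deactivate /
	 * set_task_cpu / activate sequence the helper wraps today.
	 */
	while (task) {
		deactivate_task(rq, task, 0);
		set_task_cpu(task, dst_rq->cpu);
		activate_task(dst_rq, task, 0);
		task = task->blocked_donor;	/* hypothetical field */
	}
}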

From patchwork Sat Feb 10 00:23:14 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 199166
Date: Fri, 9 Feb 2024 16:23:14 -0800
In-Reply-To: <20240210002328.4126422-1-jstultz@google.com>
References: <20240210002328.4126422-1-jstultz@google.com>
Message-ID: <20240210002328.4126422-6-jstultz@google.com>
Subject: [PATCH v8 5/7] sched: Consolidate pick_*_task to task_is_pushable helper
From: John Stultz
To: LKML
Cc: "Connor O'Brien", Joel Fernandes, Qais Yousef, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
	Youssef Esmat, Mel Gorman, Daniel Bristot de Oliveira, Will Deacon,
	Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
	K Prateek Nayak, Thomas Gleixner, kernel-team@android.com, John Stultz

From: Connor O'Brien

This patch consolidates the rt and deadline pick_*_task() functions
into a task_is_pushable() helper.

This patch was broken out from a larger chain migration patch
originally by Connor O'Brien.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Connor O'Brien
[jstultz: split out from larger chain migration patch, renamed helper function]
Signed-off-by: John Stultz
---
v7:
* Split from chain migration patch
* Renamed function
---
 kernel/sched/deadline.c | 10 +---------
 kernel/sched/rt.c       | 11 +----------
 kernel/sched/sched.h    | 10 ++++++++++
 3 files changed, 12 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index e68d88963e89..1b9cdb507498 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2179,14 +2179,6 @@ static void task_fork_dl(struct task_struct *p)
 /* Only try algorithms three times */
 #define DL_MAX_TRIES 3
 
-static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
-{
-	if (!task_on_cpu(rq, p) &&
-	    cpumask_test_cpu(cpu, &p->cpus_mask))
-		return 1;
-	return 0;
-}
-
 /*
  * Return the earliest pushable rq's task, which is suitable to be executed
  * on the CPU, NULL otherwise:
@@ -2205,7 +2197,7 @@ static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu
 	if (next_node) {
 		p = __node_2_pdl(next_node);
 
-		if (pick_dl_task(rq, p, cpu))
+		if (task_is_pushable(rq, p, cpu) == 1)
 			return p;
 
 		next_node = rb_next(next_node);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index dd072d11cc02..638e7c158ae4 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1791,15 +1791,6 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
 /* Only try algorithms three times */
 #define RT_MAX_TRIES 3
 
-static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
-{
-	if (!task_on_cpu(rq, p) &&
-	    cpumask_test_cpu(cpu, &p->cpus_mask))
-		return 1;
-
-	return 0;
-}
-
 /*
  * Return the highest pushable rq's task, which is suitable to be executed
  * on the CPU, NULL otherwise
@@ -1813,7 +1804,7 @@ static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)
 		return NULL;
 
 	plist_for_each_entry(p, head, pushable_tasks) {
-		if (pick_rt_task(rq, p, cpu))
+		if (task_is_pushable(rq, p, cpu) == 1)
 			return p;
 	}
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 6ca83837e0f4..c83e5e0672dc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3480,6 +3480,16 @@ void do_push_task(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
 	set_task_cpu(task, dst_rq->cpu);
 	activate_task(dst_rq, task, 0);
 }
+
+static inline
+int task_is_pushable(struct rq *rq, struct task_struct *p, int cpu)
+{
+	if (!task_on_cpu(rq, p) &&
+	    cpumask_test_cpu(cpu, &p->cpus_mask))
+		return 1;
+
+	return 0;
+}
 #endif
 
 #endif /* _KERNEL_SCHED_SCHED_H */
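
Worth noting: the helper keeps an int return rather than bool, and both
callers compare against 1 explicitly, which leaves room for additional
return values later in the series. A purely illustrative sketch of what
such a tri-state variant might look like; task_is_blocked() is an
assumed helper, not something this patch introduces:

static inline
int task_is_pushable(struct rq *rq, struct task_struct *p, int cpu)
{
	if (task_on_cpu(rq, p) || !cpumask_test_cpu(cpu, &p->cpus_mask))
		return 0;		/* not pushable */
	if (task_is_blocked(p))		/* assumed helper */
		return 2;		/* pushable only with its chain */
	return 1;			/* pushable on its own */
}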

From patchwork Sat Feb 10 00:23:15 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 199164
Date: Fri, 9 Feb 2024 16:23:15 -0800
In-Reply-To: <20240210002328.4126422-1-jstultz@google.com>
References: <20240210002328.4126422-1-jstultz@google.com>
Message-ID: <20240210002328.4126422-7-jstultz@google.com>
Subject: [PATCH v8 6/7] sched: Split out __schedule() deactivate task logic into a helper
From: John Stultz
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
	Youssef Esmat, Mel Gorman, Daniel Bristot de Oliveira, Will Deacon,
	Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
	K Prateek Nayak, Thomas Gleixner, kernel-team@android.com

As we're going to re-use the deactivation logic, split it into a
helper.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: John Stultz
---
v6:
* Define function as static to avoid "no previous prototype" warnings,
  as Reported-by: kernel test robot
v7:
* Rename state to task_state to be more clear, as suggested by
  Metin Kaya
---
 kernel/sched/core.c | 72 +++++++++++++++++++++++++++------------------
 1 file changed, 43 insertions(+), 29 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ad4748327651..b537e5f501ea 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6563,6 +6563,48 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 # define SM_MASK_PREEMPT	SM_PREEMPT
 #endif
 
+/*
+ * Helper function for __schedule()
+ *
+ * If a task does not have signals pending, deactivate it and return true.
+ * Otherwise it marks the task's __state as RUNNING and returns false.
+ */
+static bool try_to_deactivate_task(struct rq *rq, struct task_struct *p,
+				   unsigned long task_state)
+{
+	if (signal_pending_state(task_state, p)) {
+		WRITE_ONCE(p->__state, TASK_RUNNING);
+	} else {
+		p->sched_contributes_to_load =
+			(task_state & TASK_UNINTERRUPTIBLE) &&
+			!(task_state & TASK_NOLOAD) &&
+			!(task_state & TASK_FROZEN);
+
+		if (p->sched_contributes_to_load)
+			rq->nr_uninterruptible++;
+
+		/*
+		 * __schedule()			ttwu()
+		 *   prev_state = prev->state;	  if (p->on_rq && ...)
+		 *   if (prev_state)		    goto out;
+		 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
+		 *				  p->state = TASK_WAKING
+		 *
+		 * Where __schedule() and ttwu() have matching control
+		 * dependencies.
+		 *
+		 * After this, schedule() must not care about p->state any more.
+		 */
+		deactivate_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
+
+		if (p->in_iowait) {
+			atomic_inc(&rq->nr_iowait);
+			delayacct_blkio_start();
+		}
+		return true;
+	}
+	return false;
+}
+
 /*
  * __schedule() is the main scheduler function.
  *
@@ -6654,35 +6696,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	 */
 	prev_state = READ_ONCE(prev->__state);
 	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
-		if (signal_pending_state(prev_state, prev)) {
-			WRITE_ONCE(prev->__state, TASK_RUNNING);
-		} else {
-			prev->sched_contributes_to_load =
-				(prev_state & TASK_UNINTERRUPTIBLE) &&
-				!(prev_state & TASK_NOLOAD) &&
-				!(prev_state & TASK_FROZEN);
-
-			if (prev->sched_contributes_to_load)
-				rq->nr_uninterruptible++;
-
-			/*
-			 * __schedule()			ttwu()
-			 *   prev_state = prev->state;	  if (p->on_rq && ...)
-			 *   if (prev_state)		    goto out;
-			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
-			 *				  p->state = TASK_WAKING
-			 *
-			 * Where __schedule() and ttwu() have matching control
-			 * dependencies.
-			 *
-			 * After this, schedule() must not care about p->state any more.
-			 */
-			deactivate_task(rq, prev, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
-
-			if (prev->in_iowait) {
-				atomic_inc(&rq->nr_iowait);
-				delayacct_blkio_start();
-			}
-		}
+		try_to_deactivate_task(rq, prev, prev_state);
 		switch_count = &prev->nvcsw;
 	}
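
The bool return is unused at this call site, but it gives later patches
a hook. A hypothetical proxy-execution variant, sketched here only to
show the intent (task_is_blocked() is an assumed helper, not part of
this series as posted), could decline to deactivate a task that is
merely blocked on a mutex, leaving it queued so its lock owner can be
run in its place:

static bool try_to_deactivate_task(struct rq *rq, struct task_struct *p,
				   unsigned long task_state)
{
	if (signal_pending_state(task_state, p)) {
		WRITE_ONCE(p->__state, TASK_RUNNING);
		return false;
	}
	if (task_is_blocked(p))		/* assumed helper */
		return false;		/* keep p queued for proxying */
	/* ... deactivation path exactly as in this patch ... */
	deactivate_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
	return true;
}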

From patchwork Sat Feb 10 00:23:16 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 199167
Date: Fri, 9 Feb 2024 16:23:16 -0800
In-Reply-To: <20240210002328.4126422-1-jstultz@google.com>
References: <20240210002328.4126422-1-jstultz@google.com>
Message-ID: <20240210002328.4126422-8-jstultz@google.com>
Subject: [PATCH v8 7/7] sched: Split scheduler and execution contexts
From: John Stultz
To: LKML
Cc: Peter Zijlstra, Joel Fernandes, Qais Yousef, Ingo Molnar, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
	Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
	Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
	"Paul E. McKenney", Xuewen Yan, K Prateek Nayak, Metin Kaya,
	Thomas Gleixner, kernel-team@android.com, "Connor O'Brien", John Stultz

From: Peter Zijlstra

Let's define the scheduling context as all the scheduler state on
task_struct for the task selected to run, and the execution context as
all the state required to actually run that task.

Currently both are intertwined in task_struct. We want to logically
split these such that we can use the scheduling context of the task
selected to be scheduled, but the execution context of a different
task that will actually be run.

To this end, introduce the rq_selected() macro, which points to the
task_struct selected from the runqueue by the scheduler and will be
used for scheduler state, and preserve rq->curr to indicate the
execution context of the task that will actually be run.
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Metin Kaya
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Juri Lelli
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20181009092434.26221-5-juri.lelli@redhat.com
[add additional comments and update more sched_class code to use rq::proxy]
Signed-off-by: Connor O'Brien
[jstultz: Rebased and resolved minor collisions, reworked to use
 accessors, tweaked update_curr_common to use rq_proxy fixing rt
 scheduling issues]
Signed-off-by: John Stultz
---
v2:
* Reworked to use accessors
* Fixed update_curr_common to use proxy instead of curr
v3:
* Tweaked wrapper names
* Swapped proxy for selected for clarity
v4:
* Minor variable name tweaks for readability
* Use a macro instead of an inline function and drop other helper
  functions, as suggested by Peter
* Remove verbose comments/questions to avoid review distractions, as
  suggested by Dietmar
v5:
* Add CONFIG_PROXY_EXEC option to this patch so the new logic can be
  tested with this change
* Minor fix to grab rq_selected when holding the rq lock
v7:
* Minor spelling fix and unused argument fixes suggested by Metin Kaya
* Switch to curr_selected for consistency, and minor rewording of
  commit message for clarity
* Rename variables selected instead of curr when we're using
  rq_selected()
* Reduce macros in CONFIG_SCHED_PROXY_EXEC ifdef sections, as
  suggested by Metin Kaya
v8:
* Use rq->curr, not rq_selected with task_tick, as suggested by
  Valentin
* Minor rework to reorder this with CONFIG_SCHED_PROXY_EXEC patch
---
 kernel/sched/core.c     | 46 ++++++++++++++++++++++++---------------
 kernel/sched/deadline.c | 35 ++++++++++++++++---------------
 kernel/sched/fair.c     | 18 ++++++++--------
 kernel/sched/rt.c       | 40 +++++++++++++++++------------------
 kernel/sched/sched.h    | 25 ++++++++++++++++++++--
 5 files changed, 99 insertions(+), 65 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b537e5f501ea..c17f91d6ceba 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -794,7 +794,7 @@ static enum hrtimer_restart hrtick(struct hrtimer *timer)
 	rq_lock(rq, &rf);
 	update_rq_clock(rq);
-	rq->curr->sched_class->task_tick(rq, rq->curr, 1);
+	rq_selected(rq)->sched_class->task_tick(rq, rq->curr, 1);
 	rq_unlock(rq, &rf);
 
 	return HRTIMER_NORESTART;
@@ -2238,16 +2238,18 @@ static inline void check_class_changed(struct rq *rq, struct task_struct *p,
 
 void wakeup_preempt(struct rq *rq, struct task_struct *p, int flags)
 {
-	if (p->sched_class == rq->curr->sched_class)
-		rq->curr->sched_class->wakeup_preempt(rq, p, flags);
-	else if (sched_class_above(p->sched_class, rq->curr->sched_class))
+	struct task_struct *selected = rq_selected(rq);
+
+	if (p->sched_class == selected->sched_class)
+		selected->sched_class->wakeup_preempt(rq, p, flags);
+	else if (sched_class_above(p->sched_class, selected->sched_class))
 		resched_curr(rq);
 
 	/*
 	 * A queue event has occurred, and we're going to schedule.  In
 	 * this case, we can save a useless back to back clock update.
 	 */
-	if (task_on_rq_queued(rq->curr) && test_tsk_need_resched(rq->curr))
+	if (task_on_rq_queued(selected) && test_tsk_need_resched(rq->curr))
 		rq_clock_skip_update(rq);
 }
 
@@ -2774,7 +2776,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 	lockdep_assert_held(&p->pi_lock);
 
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 
 	if (queued) {
 		/*
@@ -5587,7 +5589,7 @@ unsigned long long task_sched_runtime(struct task_struct *p)
 	 * project cycles that may never be accounted to this
	 * thread, breaking clock_gettime().
	 */
-	if (task_current(rq, p) && task_on_rq_queued(p)) {
+	if (task_current_selected(rq, p) && task_on_rq_queued(p)) {
 		prefetch_curr_exec_start(p);
 		update_rq_clock(rq);
 		p->sched_class->update_curr(rq);
@@ -5655,7 +5657,8 @@ void scheduler_tick(void)
 {
 	int cpu = smp_processor_id();
 	struct rq *rq = cpu_rq(cpu);
-	struct task_struct *curr = rq->curr;
+	/* accounting goes to the selected task */
+	struct task_struct *selected;
 	struct rq_flags rf;
 	unsigned long thermal_pressure;
 	u64 resched_latency;
@@ -5666,16 +5669,17 @@ void scheduler_tick(void)
 	sched_clock_tick();
 
 	rq_lock(rq, &rf);
+	selected = rq_selected(rq);
 
 	update_rq_clock(rq);
 	thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
 	update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure);
-	curr->sched_class->task_tick(rq, curr, 0);
+	selected->sched_class->task_tick(rq, selected, 0);
 	if (sched_feat(LATENCY_WARN))
 		resched_latency = cpu_resched_latency(rq);
 	calc_global_load_tick(rq);
 	sched_core_tick(rq);
-	task_tick_mm_cid(rq, curr);
+	task_tick_mm_cid(rq, selected);
 
 	rq_unlock(rq, &rf);
 
@@ -5684,8 +5688,8 @@ void scheduler_tick(void)
 
 	perf_event_task_tick();
 
-	if (curr->flags & PF_WQ_WORKER)
-		wq_worker_tick(curr);
+	if (selected->flags & PF_WQ_WORKER)
+		wq_worker_tick(selected);
 
 #ifdef CONFIG_SMP
 	rq->idle_balance = idle_cpu(cpu);
@@ -5750,6 +5754,12 @@ static void sched_tick_remote(struct work_struct *work)
 		struct task_struct *curr = rq->curr;
 
 		if (cpu_online(cpu)) {
+			/*
+			 * Since this is a remote tick for full dynticks mode,
+			 * we are always sure that there is no proxy (only a
+			 * single task is running).
+			 */
+			SCHED_WARN_ON(rq->curr != rq_selected(rq));
 			update_rq_clock(rq);
 
 			if (!is_idle_task(curr)) {
@@ -6701,6 +6711,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	}
 
 	next = pick_next_task(rq, prev, &rf);
+	rq_set_selected(rq, next);
 	clear_tsk_need_resched(prev);
 	clear_preempt_need_resched();
 #ifdef CONFIG_SCHED_DEBUG
@@ -7201,7 +7212,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
 
 	prev_class = p->sched_class;
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, queue_flag);
 	if (running)
@@ -7291,7 +7302,7 @@ void set_user_nice(struct task_struct *p, long nice)
 	}
 
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_NOCLOCK);
 	if (running)
@@ -7870,7 +7881,7 @@ static int __sched_setscheduler(struct task_struct *p,
 	}
 
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, queue_flags);
 	if (running)
@@ -9297,6 +9308,7 @@ void __init init_idle(struct task_struct *idle, int cpu)
 	rcu_read_unlock();
 
 	rq->idle = idle;
+	rq_set_selected(rq, idle);
 	rcu_assign_pointer(rq->curr, idle);
 	idle->on_rq = TASK_ON_RQ_QUEUED;
 #ifdef CONFIG_SMP
@@ -9386,7 +9398,7 @@ void sched_setnuma(struct task_struct *p, int nid)
 	rq = task_rq_lock(p, &rf);
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 
 	if (queued)
 		dequeue_task(rq, p, DEQUEUE_SAVE);
@@ -10491,7 +10503,7 @@ void sched_move_task(struct task_struct *tsk)
 
 	update_rq_clock(rq);
 
-	running = task_current(rq, tsk);
+	running = task_current_selected(rq, tsk);
 	queued = task_on_rq_queued(tsk);
 
 	if (queued)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1b9cdb507498..c30b592d6e9d 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1218,7 +1218,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 #endif
 
 	enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
-	if (dl_task(rq->curr))
+	if (dl_task(rq_selected(rq)))
 		wakeup_preempt_dl(rq, p, 0);
 	else
 		resched_curr(rq);
@@ -1442,7 +1442,7 @@ void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
  */
 static void update_curr_dl(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_dl_entity *dl_se = &curr->dl;
 	s64 delta_exec;
 
@@ -1899,7 +1899,7 @@ static int find_later_rq(struct task_struct *task);
 static int
 select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 {
-	struct task_struct *curr;
+	struct task_struct *curr, *selected;
 	bool select_rq;
 	struct rq *rq;
 
@@ -1910,6 +1910,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 
 	rcu_read_lock();
 	curr = READ_ONCE(rq->curr); /* unlocked access */
+	selected = READ_ONCE(rq_selected(rq));
 
 	/*
	 * If we are dealing with a -deadline task, we must
@@ -1920,9 +1921,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
	 * other hand, if it has a shorter deadline, we
	 * try to make it stay here, it might be important.
	 */
-	select_rq = unlikely(dl_task(curr)) &&
+	select_rq = unlikely(dl_task(selected)) &&
 		    (curr->nr_cpus_allowed < 2 ||
-		     !dl_entity_preempt(&p->dl, &curr->dl)) &&
+		     !dl_entity_preempt(&p->dl, &selected->dl)) &&
 		    p->nr_cpus_allowed > 1;
 
 	/*
@@ -1985,7 +1986,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
	 * let's hope p can move out.
	 */
 	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpudl_find(&rq->rd->cpudl, rq->curr, NULL))
+	    !cpudl_find(&rq->rd->cpudl, rq_selected(rq), NULL))
 		return;
 
 	/*
@@ -2024,7 +2025,7 @@ static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 static void wakeup_preempt_dl(struct rq *rq, struct task_struct *p,
 			      int flags)
 {
-	if (dl_entity_preempt(&p->dl, &rq->curr->dl)) {
+	if (dl_entity_preempt(&p->dl, &rq_selected(rq)->dl)) {
 		resched_curr(rq);
 		return;
 	}
@@ -2034,7 +2035,7 @@ static void wakeup_preempt_dl(struct rq *rq, struct task_struct *p,
	 * In the unlikely case current and p have the same deadline
	 * let us try to decide what's the best thing to do...
	 */
-	if ((p->dl.deadline == rq->curr->dl.deadline) &&
+	if ((p->dl.deadline == rq_selected(rq)->dl.deadline) &&
 	    !test_tsk_need_resched(rq->curr))
 		check_preempt_equal_dl(rq, p);
 #endif /* CONFIG_SMP */
@@ -2066,7 +2067,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
 	if (!first)
 		return;
 
-	if (rq->curr->sched_class != &dl_sched_class)
+	if (rq_selected(rq)->sched_class != &dl_sched_class)
 		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 
 	deadline_queue_push_tasks(rq);
@@ -2391,8 +2392,8 @@ static int push_dl_task(struct rq *rq)
	 * can move away, it makes sense to just reschedule
	 * without going further in pushing next_task.
	 */
-	if (dl_task(rq->curr) &&
-	    dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) &&
+	if (dl_task(rq_selected(rq)) &&
+	    dl_time_before(next_task->dl.deadline, rq_selected(rq)->dl.deadline) &&
 	    rq->curr->nr_cpus_allowed > 1) {
 		resched_curr(rq);
 		return 0;
@@ -2515,7 +2516,7 @@ static void pull_dl_task(struct rq *this_rq)
			 *  deadline than the current task of its runqueue.
			 */
 			if (dl_time_before(p->dl.deadline,
-					   src_rq->curr->dl.deadline))
+					   rq_selected(src_rq)->dl.deadline))
 				goto skip;
 
 			if (is_migration_disabled(p)) {
@@ -2554,9 +2555,9 @@ static void task_woken_dl(struct rq *rq, struct task_struct *p)
 	if (!task_on_cpu(rq, p) &&
 	    !test_tsk_need_resched(rq->curr) &&
 	    p->nr_cpus_allowed > 1 &&
-	    dl_task(rq->curr) &&
+	    dl_task(rq_selected(rq)) &&
 	    (rq->curr->nr_cpus_allowed < 2 ||
-	     !dl_entity_preempt(&p->dl, &rq->curr->dl))) {
+	     !dl_entity_preempt(&p->dl, &rq_selected(rq)->dl))) {
 		push_dl_tasks(rq);
 	}
 }
@@ -2731,12 +2732,12 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 		return;
 	}
 
-	if (rq->curr != p) {
+	if (rq_selected(rq) != p) {
 #ifdef CONFIG_SMP
 		if (p->nr_cpus_allowed > 1 && rq->dl.overloaded)
 			deadline_queue_push_tasks(rq);
 #endif
-		if (dl_task(rq->curr))
+		if (dl_task(rq_selected(rq)))
 			wakeup_preempt_dl(rq, p, 0);
 		else
 			resched_curr(rq);
@@ -2765,7 +2766,7 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 		if (!rq->dl.overloaded)
 			deadline_queue_pull_task(rq);
 
-		if (task_current(rq, p)) {
+		if (task_current_selected(rq, p)) {
			/*
			 * If we now have a earlier deadline task than p,
			 * then reschedule, provided p is still on this
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 533547e3c90a..dc342e1fc420 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1140,7 +1140,7 @@ static inline void update_curr_task(struct task_struct *p, s64 delta_exec)
  */
 s64 update_curr_common(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	s64 delta_exec;
 
 	delta_exec = update_curr_se(rq, &curr->se);
@@ -1177,7 +1177,7 @@ static void update_curr(struct cfs_rq *cfs_rq)
 
 static void update_curr_fair(struct rq *rq)
 {
-	update_curr(cfs_rq_of(&rq->curr->se));
+	update_curr(cfs_rq_of(&rq_selected(rq)->se));
 }
 
 static inline void
@@ -6627,7 +6627,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 		s64 delta = slice - ran;
 
 		if (delta < 0) {
-			if (task_current(rq, p))
+			if (task_current_selected(rq, p))
 				resched_curr(rq);
 			return;
 		}
@@ -6642,7 +6642,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
  */
 static void hrtick_update(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 
 	if (!hrtick_enabled_fair(rq) || curr->sched_class != &fair_sched_class)
 		return;
@@ -8267,7 +8267,7 @@ static void set_next_buddy(struct sched_entity *se)
  */
 static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int wake_flags)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_entity *se = &curr->se, *pse = &p->se;
 	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
 	int cse_is_idle, pse_is_idle;
@@ -8298,7 +8298,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
	 * prevents us from potentially nominating it as a false LAST_BUDDY
	 * below.
	 */
-	if (test_tsk_need_resched(curr))
+	if (test_tsk_need_resched(rq->curr))
 		return;
 
 	/* Idle tasks are by definition preempted by non-idle tasks. */
@@ -9282,7 +9282,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
	 * update_load_avg() can call cpufreq_update_util(). Make sure that RT,
	 * DL and IRQ signals have been updated before updating CFS.
	 */
-	curr_class = rq->curr->sched_class;
+	curr_class = rq_selected(rq)->sched_class;
 
 	thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
 
@@ -12673,7 +12673,7 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
	 * our priority decreased, or if we are not currently running on
	 * this runqueue and our priority is higher than the current's
	 */
-	if (task_current(rq, p)) {
+	if (task_current_selected(rq, p)) {
 		if (p->prio > oldprio)
 			resched_curr(rq);
 	} else
@@ -12776,7 +12776,7 @@ static void switched_to_fair(struct rq *rq, struct task_struct *p)
		 * kick off the schedule if running, otherwise just see
		 * if we can still preempt the current task.
		 */
-		if (task_current(rq, p))
+		if (task_current_selected(rq, p))
 			resched_curr(rq);
 		else
 			wakeup_preempt(rq, p, 0);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 638e7c158ae4..48fc7a198f1a 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -530,7 +530,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)
 
 static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
 {
-	struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
+	struct task_struct *curr = rq_selected(rq_of_rt_rq(rt_rq));
 	struct rq *rq = rq_of_rt_rq(rt_rq);
 	struct sched_rt_entity *rt_se;
 
@@ -1000,7 +1000,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
  */
 static void update_curr_rt(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_rt_entity *rt_se = &curr->rt;
 	s64 delta_exec;
 
@@ -1543,7 +1543,7 @@ static int find_lowest_rq(struct task_struct *task);
 static int
 select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 {
-	struct task_struct *curr;
+	struct task_struct *curr, *selected;
 	struct rq *rq;
 	bool test;
 
@@ -1555,6 +1555,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 
 	rcu_read_lock();
 	curr = READ_ONCE(rq->curr); /* unlocked access */
+	selected = READ_ONCE(rq_selected(rq));
 
 	/*
	 * If the current task on @p's runqueue is an RT task, then
@@ -1583,8 +1584,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
	 * systems like big.LITTLE.
	 */
 	test = curr &&
-	       unlikely(rt_task(curr)) &&
-	       (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
+	       unlikely(rt_task(selected)) &&
+	       (curr->nr_cpus_allowed < 2 || selected->prio <= p->prio);
 
 	if (test || !rt_task_fits_capacity(p, cpu)) {
 		int target = find_lowest_rq(p);
@@ -1614,12 +1615,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 
 static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
 {
-	/*
-	 * Current can't be migrated, useless to reschedule,
-	 * let's hope p can move out.
-	 */
 	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+	    !cpupri_find(&rq->rd->cpupri, rq_selected(rq), NULL))
 		return;
 
 	/*
@@ -1662,7 +1659,9 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
  */
 static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
 {
-	if (p->prio < rq->curr->prio) {
+	struct task_struct *curr = rq_selected(rq);
+
+	if (p->prio < curr->prio) {
 		resched_curr(rq);
 		return;
 	}
@@ -1680,7 +1679,7 @@ static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
	 * to move current somewhere else, making room for our non-migratable
	 * task.
	 */
-	if (p->prio == rq->curr->prio && !test_tsk_need_resched(rq->curr))
+	if (p->prio == curr->prio && !test_tsk_need_resched(rq->curr))
 		check_preempt_equal_prio(rq, p);
 #endif
 }
@@ -1705,7 +1704,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
	 * utilization. We only care of the case where we start to schedule a
	 * rt task
	 */
-	if (rq->curr->sched_class != &rt_sched_class)
+	if (rq_selected(rq)->sched_class != &rt_sched_class)
 		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);
 
 	rt_queue_push_tasks(rq);
@@ -1977,6 +1976,7 @@ static struct task_struct *pick_next_pushable_task(struct rq *rq)
 
 	BUG_ON(rq->cpu != task_cpu(p));
 	BUG_ON(task_current(rq, p));
+	BUG_ON(task_current_selected(rq, p));
 	BUG_ON(p->nr_cpus_allowed <= 1);
 
 	BUG_ON(!task_on_rq_queued(p));
@@ -2009,7 +2009,7 @@ static int push_rt_task(struct rq *rq, bool pull)
	 * higher priority than current. If that's the case
	 * just reschedule current.
	 */
-	if (unlikely(next_task->prio < rq->curr->prio)) {
+	if (unlikely(next_task->prio < rq_selected(rq)->prio)) {
 		resched_curr(rq);
 		return 0;
 	}
@@ -2362,7 +2362,7 @@ static void pull_rt_task(struct rq *this_rq)
			 * p if it is lower in priority than the
			 * current task on the run queue
			 */
-			if (p->prio < src_rq->curr->prio)
+			if (p->prio < rq_selected(src_rq)->prio)
 				goto skip;
 
 			if (is_migration_disabled(p)) {
@@ -2404,9 +2404,9 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
 	bool need_to_push = !task_on_cpu(rq, p) &&
 			    !test_tsk_need_resched(rq->curr) &&
 			    p->nr_cpus_allowed > 1 &&
-			    (dl_task(rq->curr) || rt_task(rq->curr)) &&
+			    (dl_task(rq_selected(rq)) || rt_task(rq_selected(rq))) &&
 			    (rq->curr->nr_cpus_allowed < 2 ||
-			     rq->curr->prio <= p->prio);
+			     rq_selected(rq)->prio <= p->prio);
 
 	if (need_to_push)
 		push_rt_tasks(rq);
@@ -2490,7 +2490,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
 			rt_queue_push_tasks(rq);
 #endif /* CONFIG_SMP */
-		if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
+		if (p->prio < rq_selected(rq)->prio && cpu_online(cpu_of(rq)))
 			resched_curr(rq);
 	}
 }
@@ -2505,7 +2505,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 	if (!task_on_rq_queued(p))
 		return;
 
-	if (task_current(rq, p)) {
+	if (task_current_selected(rq, p)) {
 #ifdef CONFIG_SMP
		/*
		 * If our priority decreases while running, we
@@ -2531,7 +2531,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
		 * greater than the current running task
		 * then reschedule.
		 */
-		if (p->prio < rq->curr->prio)
+		if (p->prio < rq_selected(rq)->prio)
 			resched_curr(rq);
 	}
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c83e5e0672dc..808d6ee8ae33 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1030,7 +1030,7 @@ struct rq {
	 */
 	unsigned int		nr_uninterruptible;
 
-	struct task_struct __rcu	*curr;
+	struct task_struct __rcu	*curr;	/* Execution context */
 	struct task_struct	*idle;
 	struct task_struct	*stop;
 	unsigned long		next_balance;
@@ -1225,6 +1225,13 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		raw_cpu_ptr(&runqueues)
 
+/* For now, rq_selected == rq->curr */
+#define rq_selected(rq)		((rq)->curr)
+static inline void rq_set_selected(struct rq *rq, struct task_struct *t)
+{
+	/* Do nothing */
+}
+
 struct sched_group;
 #ifdef CONFIG_SCHED_CORE
 static inline struct cpumask *sched_group_span(struct sched_group *sg);
@@ -2148,11 +2155,25 @@ static inline u64 global_rt_runtime(void)
 	return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
 }
 
+/*
+ * Is p the current execution context?
+ */
 static inline int task_current(struct rq *rq, struct task_struct *p)
 {
 	return rq->curr == p;
 }
 
+/*
+ * Is p the current scheduling context?
+ *
+ * Note that it might be the current execution context at the same time if
+ * rq->curr == rq_selected() == p.
+ */
+static inline int task_current_selected(struct rq *rq, struct task_struct *p)
+{
+	return rq_selected(rq) == p;
+}
+
 static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
 {
 #ifdef CONFIG_SMP
@@ -2322,7 +2343,7 @@ struct sched_class {
 
 static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
 {
-	WARN_ON_ONCE(rq->curr != prev);
+	WARN_ON_ONCE(rq_selected(rq) != prev);
 	prev->sched_class->put_prev_task(rq, prev);
 }
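
Since rq_selected() maps straight onto rq->curr here, the whole patch
is intended to be a functional no-op until proxy execution is enabled.
Based on the CONFIG_SCHED_PROXY_EXEC option and the curr_selected
naming mentioned in the version notes, the eventual split would
presumably look something like the sketch below; the curr_selected
field is an assumption on my part, not something this patch adds:

#ifdef CONFIG_SCHED_PROXY_EXEC
/* rq->curr_selected: assumed new field holding the scheduling context */
#define rq_selected(rq)		((rq)->curr_selected)
static inline void rq_set_selected(struct rq *rq, struct task_struct *t)
{
	rcu_assign_pointer(rq->curr_selected, t);
}
#else
/* Without proxy-exec, scheduling and execution contexts coincide */
#define rq_selected(rq)		((rq)->curr)
static inline void rq_set_selected(struct rq *rq, struct task_struct *t)
{
	/* Do nothing */
}
#endif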