From patchwork Sat Feb 24 00:11:41 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 205726
Date: Fri, 23 Feb 2024 16:11:41 -0800
In-Reply-To: <20240224001153.2584030-1-jstultz@google.com>
References: <20240224001153.2584030-1-jstultz@google.com>
Message-ID: <20240224001153.2584030-2-jstultz@google.com>
Subject: [RESEND][PATCH v8 1/7] locking/mutex: Remove wakeups from under
 mutex::wait_lock
From: John Stultz
To: LKML
Cc: Peter Zijlstra, Joel Fernandes, Qais Yousef, Ingo Molnar, Juri Lelli,
 Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
 Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
 Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
 "Paul E. McKenney", Metin Kaya, Xuewen Yan, K Prateek Nayak,
 Thomas Gleixner, kernel-team@android.com, John Stultz

From: Peter Zijlstra

In preparation for nesting mutex::wait_lock under rq::lock, we need to
remove wakeups from under it.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Peter Zijlstra (Intel)
[Heavily changed after 55f036ca7e74 ("locking: WW mutex cleanup") and
 08295b3b5bee ("locking: Implement an algorithm choice for Wound-Wait
 mutexes")]
Signed-off-by: Juri Lelli
[jstultz: rebased to mainline, added extra wake_up_q & init to avoid
 hangs, similar to Connor's rework of this patch]
Signed-off-by: John Stultz
---
v5:
* Reverted back to an earlier version of this patch to undo the change
  that kept the wake_q in the ctx structure, as that broke the rule that
  the wake_q must always be on the stack, as it's not safe for
  concurrency.
v6:
* Made tweaks suggested by Waiman Long
v7:
* Fixups to pass wake_qs down for PREEMPT_RT logic
---
 kernel/locking/mutex.c       | 17 +++++++++++++----
 kernel/locking/rtmutex.c     | 26 +++++++++++++++++---------
 kernel/locking/rwbase_rt.c   |  4 +++-
 kernel/locking/rwsem.c       |  4 ++--
 kernel/locking/spinlock_rt.c |  3 ++-
 kernel/locking/ww_mutex.h    | 29 ++++++++++++++++++-----------
 6 files changed, 55 insertions(+), 28 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index cbae8c0b89ab..980ce630232c 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -575,6 +575,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		    struct lockdep_map *nest_lock, unsigned long ip,
 		    struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
 {
+	DEFINE_WAKE_Q(wake_q);
 	struct mutex_waiter waiter;
 	struct ww_mutex *ww;
 	int ret;
@@ -625,7 +626,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	 */
 	if (__mutex_trylock(lock)) {
 		if (ww_ctx)
-			__ww_mutex_check_waiters(lock, ww_ctx);
+			__ww_mutex_check_waiters(lock, ww_ctx, &wake_q);
 		goto skip_wait;
 	}
@@ -645,7 +646,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		 * Add in stamp order, waking up waiters that must kill
 		 * themselves.
 		 */
-		ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);
+		ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx, &wake_q);
 		if (ret)
 			goto err_early_kill;
 	}
@@ -681,6 +682,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		}
 		raw_spin_unlock(&lock->wait_lock);
+		/* Make sure we do wakeups before calling schedule */
+		if (!wake_q_empty(&wake_q)) {
+			wake_up_q(&wake_q);
+			wake_q_init(&wake_q);
+		}
 		schedule_preempt_disabled();
 		first = __mutex_waiter_is_first(lock, &waiter);
@@ -714,7 +720,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		 */
 		if (!ww_ctx->is_wait_die &&
 		    !__mutex_waiter_is_first(lock, &waiter))
-			__ww_mutex_check_waiters(lock, ww_ctx);
+			__ww_mutex_check_waiters(lock, ww_ctx, &wake_q);
 	}
 	__mutex_remove_waiter(lock, &waiter);
@@ -730,6 +736,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		ww_mutex_lock_acquired(ww, ww_ctx);
 	raw_spin_unlock(&lock->wait_lock);
+	wake_up_q(&wake_q);
 	preempt_enable();
 	return 0;
@@ -741,6 +748,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	raw_spin_unlock(&lock->wait_lock);
 	debug_mutex_free_waiter(&waiter);
 	mutex_release(&lock->dep_map, ip);
+	wake_up_q(&wake_q);
 	preempt_enable();
 	return ret;
 }
@@ -934,6 +942,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		}
 	}
+	preempt_disable();
 	raw_spin_lock(&lock->wait_lock);
 	debug_mutex_unlock(lock);
 	if (!list_empty(&lock->wait_list)) {
@@ -952,8 +961,8 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 		__mutex_handoff(lock, next);
 	raw_spin_unlock(&lock->wait_lock);
-
 	wake_up_q(&wake_q);
+	preempt_enable();
 }
 
 #ifndef CONFIG_DEBUG_LOCK_ALLOC
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 4a10e8c16fd2..eaac8b196a69 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -34,13 +34,15 @@
 static inline int __ww_mutex_add_waiter(struct rt_mutex_waiter *waiter,
 					struct rt_mutex *lock,
-					struct ww_acquire_ctx *ww_ctx)
+					struct ww_acquire_ctx *ww_ctx,
+					struct wake_q_head *wake_q)
 {
 	return 0;
 }
 
 static inline void __ww_mutex_check_waiters(struct rt_mutex *lock,
-					    struct ww_acquire_ctx *ww_ctx)
+					    struct ww_acquire_ctx *ww_ctx,
+					    struct wake_q_head *wake_q)
 {
 }
 
@@ -1206,6 +1208,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,
 	struct rt_mutex_waiter *top_waiter = waiter;
 	struct rt_mutex_base *next_lock;
 	int chain_walk = 0, res;
+	DEFINE_WAKE_Q(wake_q);
 
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -1244,7 +1247,8 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,
 	/* Check whether the waiter should back out immediately */
 	rtm = container_of(lock, struct rt_mutex, rtmutex);
-	res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx);
+	res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx, &wake_q);
+	wake_up_q(&wake_q);
 	if (res) {
 		raw_spin_lock(&task->pi_lock);
 		rt_mutex_dequeue(lock, waiter);
@@ -1677,7 +1681,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 				       struct ww_acquire_ctx *ww_ctx,
 				       unsigned int state,
 				       enum rtmutex_chainwalk chwalk,
-				       struct rt_mutex_waiter *waiter)
+				       struct rt_mutex_waiter *waiter,
+				       struct wake_q_head *wake_q)
 {
 	struct rt_mutex *rtm = container_of(lock, struct rt_mutex, rtmutex);
 	struct ww_mutex *ww = ww_container_of(rtm);
@@ -1688,7 +1693,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	/* Try to acquire the lock again: */
 	if (try_to_take_rt_mutex(lock, current, NULL)) {
 		if (build_ww_mutex() && ww_ctx) {
-			__ww_mutex_check_waiters(rtm, ww_ctx);
+			__ww_mutex_check_waiters(rtm, ww_ctx, wake_q);
 			ww_mutex_lock_acquired(ww, ww_ctx);
 		}
 		return 0;
@@ -1706,7 +1711,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 		/* acquired the lock */
 		if (build_ww_mutex() && ww_ctx) {
 			if (!ww_ctx->is_wait_die)
-				__ww_mutex_check_waiters(rtm, ww_ctx);
+				__ww_mutex_check_waiters(rtm, ww_ctx, wake_q);
 			ww_mutex_lock_acquired(ww, ww_ctx);
 		}
 	} else {
@@ -1728,7 +1733,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 static inline int __rt_mutex_slowlock_locked(struct rt_mutex_base *lock,
 					     struct ww_acquire_ctx *ww_ctx,
-					     unsigned int state)
+					     unsigned int state,
+					     struct wake_q_head *wake_q)
 {
 	struct rt_mutex_waiter waiter;
 	int ret;
@@ -1737,7 +1743,7 @@ static inline int __rt_mutex_slowlock_locked(struct rt_mutex_base *lock,
 	waiter.ww_ctx = ww_ctx;
 	ret = __rt_mutex_slowlock(lock, ww_ctx, state, RT_MUTEX_MIN_CHAINWALK,
-				  &waiter);
+				  &waiter, wake_q);
 	debug_rt_mutex_free_waiter(&waiter);
 	return ret;
@@ -1753,6 +1759,7 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 				     struct ww_acquire_ctx *ww_ctx,
 				     unsigned int state)
 {
+	DEFINE_WAKE_Q(wake_q);
 	unsigned long flags;
 	int ret;
@@ -1774,8 +1781,9 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 	 * irqsave/restore variants.
 	 */
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
-	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state);
+	ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state, &wake_q);
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+	wake_up_q(&wake_q);
 	rt_mutex_post_schedule();
 
 	return ret;
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 34a59569db6b..e9d2f38b70f3 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -69,6 +69,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
 				      unsigned int state)
 {
 	struct rt_mutex_base *rtm = &rwb->rtmutex;
+	DEFINE_WAKE_Q(wake_q);
 	int ret;
 
 	rwbase_pre_schedule();
@@ -110,7 +111,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
 	 * For rwlocks this returns 0 unconditionally, so the below
 	 * !ret conditionals are optimized out.
 	 */
-	ret = rwbase_rtmutex_slowlock_locked(rtm, state);
+	ret = rwbase_rtmutex_slowlock_locked(rtm, state, &wake_q);
 
 	/*
 	 * On success the rtmutex is held, so there can't be a writer
@@ -122,6 +123,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
 	if (!ret)
 		atomic_inc(&rwb->readers);
 	raw_spin_unlock_irq(&rtm->wait_lock);
+	wake_up_q(&wake_q);
 	if (!ret)
 		rwbase_rtmutex_unlock(rtm);
 
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 2340b6d90ec6..74ebb2915d63 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1415,8 +1415,8 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
 #define rwbase_rtmutex_lock_state(rtm, state)		\
 	__rt_mutex_lock(rtm, state)
 
-#define rwbase_rtmutex_slowlock_locked(rtm, state)	\
-	__rt_mutex_slowlock_locked(rtm, NULL, state)
+#define rwbase_rtmutex_slowlock_locked(rtm, state, wq)	\
+	__rt_mutex_slowlock_locked(rtm, NULL, state, wq)
 
 #define rwbase_rtmutex_unlock(rtm)			\
 	__rt_mutex_unlock(rtm)
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index 38e292454fcc..fb1810a14c9d 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -162,7 +162,8 @@ rwbase_rtmutex_lock_state(struct rt_mutex_base *rtm, unsigned int state)
 }
 
 static __always_inline int
-rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state)
+rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state,
+			       struct wake_q_head *wake_q)
 {
 	rtlock_slowlock_locked(rtm);
 	return 0;
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 3ad2cc4823e5..7189c6631d90 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -275,7 +275,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
  */
 static bool
 __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
-	       struct ww_acquire_ctx *ww_ctx)
+	       struct ww_acquire_ctx *ww_ctx, struct wake_q_head *wake_q)
 {
 	if (!ww_ctx->is_wait_die)
 		return false;
@@ -284,7 +284,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 #ifndef WW_RT
 		debug_mutex_wake_waiter(lock, waiter);
 #endif
-		wake_up_process(waiter->task);
+		wake_q_add(wake_q, waiter->task);
 	}
 
 	return true;
@@ -299,7 +299,8 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
  */
 static bool __ww_mutex_wound(struct MUTEX *lock,
 			     struct ww_acquire_ctx *ww_ctx,
-			     struct ww_acquire_ctx *hold_ctx)
+			     struct ww_acquire_ctx *hold_ctx,
+			     struct wake_q_head *wake_q)
 {
 	struct task_struct *owner = __ww_mutex_owner(lock);
 
@@ -331,7 +332,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 	 * wakeup pending to re-read the wounded state.
 	 */
 	if (owner != current)
-		wake_up_process(owner);
+		wake_q_add(wake_q, owner);
 
 	return true;
 }
@@ -352,7 +353,8 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
 * The current task must not be on the wait list.
 */
 static void
-__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
+__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx,
+			 struct wake_q_head *wake_q)
 {
 	struct MUTEX_WAITER *cur;
 
@@ -364,8 +366,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
 		if (!cur->ww_ctx)
 			continue;
 
-		if (__ww_mutex_die(lock, cur, ww_ctx) ||
-		    __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx))
+		if (__ww_mutex_die(lock, cur, ww_ctx, wake_q) ||
+		    __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx, wake_q))
 			break;
 	}
 }
@@ -377,6 +379,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
 static __always_inline void
 ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
+	DEFINE_WAKE_Q(wake_q);
+
 	ww_mutex_lock_acquired(lock, ctx);
 
 	/*
@@ -405,8 +409,10 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 	 * die or wound us.
 	 */
 	lock_wait_lock(&lock->base);
-	__ww_mutex_check_waiters(&lock->base, ctx);
+	__ww_mutex_check_waiters(&lock->base, ctx, &wake_q);
 	unlock_wait_lock(&lock->base);
+
+	wake_up_q(&wake_q);
 }
 
 static __always_inline int
@@ -488,7 +494,8 @@ __ww_mutex_check_kill(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 static inline int
 __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
 		      struct MUTEX *lock,
-		      struct ww_acquire_ctx *ww_ctx)
+		      struct ww_acquire_ctx *ww_ctx,
+		      struct wake_q_head *wake_q)
 {
 	struct MUTEX_WAITER *cur, *pos = NULL;
 	bool is_wait_die;
@@ -532,7 +539,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
 			pos = cur;
 
 		/* Wait-Die: ensure younger waiters die. */
-		__ww_mutex_die(lock, cur, ww_ctx);
+		__ww_mutex_die(lock, cur, ww_ctx, wake_q);
 	}
 
 	__ww_waiter_add(lock, waiter, pos);
@@ -550,7 +557,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
 	 * such that either we or the fastpath will wound @ww->ctx.
 	 */
 	smp_mb();
-	__ww_mutex_wound(lock, ww_ctx, ww->ctx);
+	__ww_mutex_wound(lock, ww_ctx, ww->ctx, wake_q);
 	}
 
 	return 0;

From patchwork Sat Feb 24 00:11:42 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 205727
Date: Fri, 23 Feb 2024 16:11:42 -0800
In-Reply-To: <20240224001153.2584030-1-jstultz@google.com>
References: <20240224001153.2584030-1-jstultz@google.com>
Message-ID: <20240224001153.2584030-3-jstultz@google.com>
Subject: [RESEND][PATCH v8 2/7] locking/mutex: Make mutex::wait_lock irq safe
From: John Stultz
To: LKML
Cc: Juri Lelli, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
 Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
 Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
 Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
 "Paul E. McKenney", Metin Kaya, Xuewen Yan, K Prateek Nayak,
 Thomas Gleixner, kernel-team@android.com, "Connor O'Brien", John Stultz

From: Juri Lelli

mutex::wait_lock might be nested under rq->lock, so make it irq safe.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Juri Lelli
Signed-off-by: Peter Zijlstra (Intel)
[rebase & fix {un,}lock_wait_lock helpers in ww_mutex.h]
Signed-off-by: Connor O'Brien
Signed-off-by: John Stultz
---
v3:
* Re-added this patch after it was dropped in v2, which caused lockdep
  warnings to trip.
v7:
* Fix function definition for the PREEMPT_RT case, as pointed out by
  Metin Kaya.
* Fix incorrect flags handling in the PREEMPT_RT case, as found by
  Metin Kaya.
---
 kernel/locking/mutex.c    | 18 ++++++++++--------
 kernel/locking/ww_mutex.h | 22 +++++++++++-----------
 2 files changed, 21 insertions(+), 19 deletions(-)
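The conversion is mechanical: each raw_spin_lock()/raw_spin_unlock() on
wait_lock becomes the irqsave/irqrestore variant, with an unsigned long
flags word held by the caller, so the lock can safely be taken in (or
nested under) interrupt-disabled context. A minimal sketch of the shape
(hypothetical function, not from the patch itself):

/* Sketch of the irqsave conversion (illustrative only). */
static void example_irq_safe(struct mutex *lock)
{
	unsigned long flags;

	/* was: raw_spin_lock(&lock->wait_lock); */
	raw_spin_lock_irqsave(&lock->wait_lock, flags);

	/* ... critical section now runs with local interrupts off ... */

	/* was: raw_spin_unlock(&lock->wait_lock); */
	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
}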
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 980ce630232c..7de72c610c65 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -578,6 +578,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	DEFINE_WAKE_Q(wake_q);
 	struct mutex_waiter waiter;
 	struct ww_mutex *ww;
+	unsigned long flags;
 	int ret;
 
 	if (!use_ww_ctx)
@@ -620,7 +621,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 		return 0;
 	}
 
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	/*
 	 * After waiting to acquire the wait_lock, try again.
 	 */
@@ -681,7 +682,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 			goto err;
 		}
 
-		raw_spin_unlock(&lock->wait_lock);
+		raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 		/* Make sure we do wakeups before calling schedule */
 		if (!wake_q_empty(&wake_q)) {
 			wake_up_q(&wake_q);
@@ -707,9 +708,9 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 			trace_contention_begin(lock, LCB_F_MUTEX);
 		}
 
-		raw_spin_lock(&lock->wait_lock);
+		raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	}
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 acquired:
 	__set_current_state(TASK_RUNNING);
@@ -735,7 +736,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	if (ww_ctx)
 		ww_mutex_lock_acquired(ww, ww_ctx);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 	wake_up_q(&wake_q);
 	preempt_enable();
 	return 0;
@@ -745,7 +746,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	__mutex_remove_waiter(lock, &waiter);
 err_early_kill:
 	trace_contention_end(lock, ret);
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 	debug_mutex_free_waiter(&waiter);
 	mutex_release(&lock->dep_map, ip);
 	wake_up_q(&wake_q);
@@ -916,6 +917,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 	struct task_struct *next = NULL;
 	DEFINE_WAKE_Q(wake_q);
 	unsigned long owner;
+	unsigned long flags;
 
 	mutex_release(&lock->dep_map, ip);
@@ -943,7 +945,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 	}
 
 	preempt_disable();
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	debug_mutex_unlock(lock);
 	if (!list_empty(&lock->wait_list)) {
 		/* get the first entry from the wait-list: */
@@ -960,7 +962,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
 	if (owner & MUTEX_FLAG_HANDOFF)
 		__mutex_handoff(lock, next);
 
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 	wake_up_q(&wake_q);
 	preempt_enable();
 }
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 7189c6631d90..9facc0ddfdd3 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -70,14 +70,14 @@ __ww_mutex_has_waiters(struct mutex *lock)
 	return atomic_long_read(&lock->owner) & MUTEX_FLAG_WAITERS;
 }
 
-static inline void lock_wait_lock(struct mutex *lock)
+static inline void lock_wait_lock(struct mutex *lock, unsigned long *flags)
 {
-	raw_spin_lock(&lock->wait_lock);
+	raw_spin_lock_irqsave(&lock->wait_lock, *flags);
 }
 
-static inline void unlock_wait_lock(struct mutex *lock)
+static inline void unlock_wait_lock(struct mutex *lock, unsigned long *flags)
 {
-	raw_spin_unlock(&lock->wait_lock);
+	raw_spin_unlock_irqrestore(&lock->wait_lock, *flags);
 }
 
 static inline void lockdep_assert_wait_lock_held(struct mutex *lock)
@@ -144,14 +144,14 @@ __ww_mutex_has_waiters(struct rt_mutex *lock)
 	return rt_mutex_has_waiters(&lock->rtmutex);
 }
 
-static inline void lock_wait_lock(struct rt_mutex *lock)
+static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
 {
-	raw_spin_lock(&lock->rtmutex.wait_lock);
+	raw_spin_lock_irqsave(&lock->rtmutex.wait_lock, *flags);
 }
 
-static inline void unlock_wait_lock(struct rt_mutex *lock)
+static inline void unlock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
 {
-	raw_spin_unlock(&lock->rtmutex.wait_lock);
+	raw_spin_unlock_irqrestore(&lock->rtmutex.wait_lock, *flags);
 }
 
 static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock)
@@ -380,6 +380,7 @@ static __always_inline void
 ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
 	DEFINE_WAKE_Q(wake_q);
+	unsigned long flags;
 
 	ww_mutex_lock_acquired(lock, ctx);
@@ -408,10 +409,9 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 	 * Uh oh, we raced in fastpath, check if any of the waiters need to
 	 * die or wound us.
 	 */
-	lock_wait_lock(&lock->base);
+	lock_wait_lock(&lock->base, &flags);
 	__ww_mutex_check_waiters(&lock->base, ctx, &wake_q);
-	unlock_wait_lock(&lock->base);
-
+	unlock_wait_lock(&lock->base, &flags);
 	wake_up_q(&wake_q);
 }

From patchwork Sat Feb 24 00:11:43 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 205728
Date: Fri, 23 Feb 2024 16:11:43 -0800
In-Reply-To: <20240224001153.2584030-1-jstultz@google.com>
References: <20240224001153.2584030-1-jstultz@google.com>
Message-ID: <20240224001153.2584030-4-jstultz@google.com>
Subject: [RESEND][PATCH v8 3/7] locking/mutex: Expose __mutex_owner()
From: John Stultz
To: LKML
Cc: Juri Lelli, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
 Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
 Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
 Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
 "Paul E. McKenney", Metin Kaya, Xuewen Yan, K Prateek Nayak,
 Thomas Gleixner, kernel-team@android.com, Valentin Schneider,
 "Connor O'Brien", John Stultz

From: Juri Lelli

Implementing proxy execution requires that scheduler code be able to
identify the current owner of a mutex. Expose __mutex_owner() for this
purpose (alone!).
Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Juri Lelli
[Removed the EXPORT_SYMBOL]
Signed-off-by: Valentin Schneider
Signed-off-by: Connor O'Brien
[jstultz: Reworked per Peter's suggestions]
Signed-off-by: John Stultz
---
v4:
* Move __mutex_owner() to kernel/locking/mutex.h instead of adding a new
  globally available accessor function, to keep the exposure of this low,
  along with keeping it an inline function, as suggested by PeterZ
---
 kernel/locking/mutex.c | 25 -------------------------
 kernel/locking/mutex.h | 25 +++++++++++++++++++++++++
 2 files changed, 25 insertions(+), 25 deletions(-)
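With __mutex_owner() moved into kernel/locking/mutex.h, scheduler code
that includes that header can query a mutex's owner. A hypothetical
sketch of the kind of consumer the later proxy-execution patches add;
owner_of_blocking_mutex() is illustrative only and is not part of this
patch:

/* Hypothetical consumer sketch (not part of this patch). */
static struct task_struct *owner_of_blocking_mutex(struct mutex *m)
{
	/*
	 * May return NULL (unowned), and is inherently racy unless the
	 * caller holds the mutex's wait_lock.
	 */
	return m ? __mutex_owner(m) : NULL;
}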
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 7de72c610c65..5741641be914 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -56,31 +56,6 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
 }
 EXPORT_SYMBOL(__mutex_init);
 
-/*
- * @owner: contains: 'struct task_struct *' to the current lock owner,
- * NULL means not owned. Since task_struct pointers are aligned at
- * at least L1_CACHE_BYTES, we have low bits to store extra state.
- *
- * Bit0 indicates a non-empty waiter list; unlock must issue a wakeup.
- * Bit1 indicates unlock needs to hand the lock to the top-waiter
- * Bit2 indicates handoff has been done and we're waiting for pickup.
- */
-#define MUTEX_FLAG_WAITERS	0x01
-#define MUTEX_FLAG_HANDOFF	0x02
-#define MUTEX_FLAG_PICKUP	0x04
-
-#define MUTEX_FLAGS		0x07
-
-/*
- * Internal helper function; C doesn't allow us to hide it :/
- *
- * DO NOT USE (outside of mutex code).
- */
-static inline struct task_struct *__mutex_owner(struct mutex *lock)
-{
-	return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLAGS);
-}
-
 static inline struct task_struct *__owner_task(unsigned long owner)
 {
 	return (struct task_struct *)(owner & ~MUTEX_FLAGS);
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 0b2a79c4013b..1c7d3d32def8 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -20,6 +20,31 @@ struct mutex_waiter {
 #endif
 };
 
+/*
+ * @owner: contains: 'struct task_struct *' to the current lock owner,
+ * NULL means not owned. Since task_struct pointers are aligned at
+ * at least L1_CACHE_BYTES, we have low bits to store extra state.
+ *
+ * Bit0 indicates a non-empty waiter list; unlock must issue a wakeup.
+ * Bit1 indicates unlock needs to hand the lock to the top-waiter
+ * Bit2 indicates handoff has been done and we're waiting for pickup.
+ */
+#define MUTEX_FLAG_WAITERS	0x01
+#define MUTEX_FLAG_HANDOFF	0x02
+#define MUTEX_FLAG_PICKUP	0x04
+
+#define MUTEX_FLAGS		0x07
+
+/*
+ * Internal helper function; C doesn't allow us to hide it :/
+ *
+ * DO NOT USE (outside of mutex & scheduler code).
+ */
+static inline struct task_struct *__mutex_owner(struct mutex *lock)
+{
+	return (struct task_struct *)(atomic_long_read(&lock->owner) & ~MUTEX_FLAGS);
+}
+
 #ifdef CONFIG_DEBUG_MUTEXES
 extern void debug_mutex_lock_common(struct mutex *lock,
 				    struct mutex_waiter *waiter);
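The moved comment documents the owner-field encoding that makes
__mutex_owner() work: task_struct pointers are at least L1_CACHE_BYTES
aligned, so the low three bits of lock->owner are free to carry the
MUTEX_FLAG_* state, and masking with ~MUTEX_FLAGS recovers the pointer.
A standalone userspace sketch of the encoding (illustrative only, not
kernel code; the address is a fake aligned pointer):

#include <stdint.h>
#include <stdio.h>

#define MUTEX_FLAG_WAITERS	0x01
#define MUTEX_FLAG_HANDOFF	0x02
#define MUTEX_FLAG_PICKUP	0x04
#define MUTEX_FLAGS		0x07

int main(void)
{
	/* A fake, suitably aligned task_struct address with two flags set. */
	uintptr_t task  = (uintptr_t)0xffff888012345600u;
	uintptr_t owner = task | MUTEX_FLAG_WAITERS | MUTEX_FLAG_HANDOFF;

	/* Masking the low bits recovers the pointer, as __mutex_owner() does. */
	printf("task:  %#lx\n", (unsigned long)(owner & ~(uintptr_t)MUTEX_FLAGS));
	printf("flags: %#lx\n", (unsigned long)(owner & MUTEX_FLAGS));
	return 0;
}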
From patchwork Sat Feb 24 00:11:44 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 205729
Date: Fri, 23 Feb 2024 16:11:44 -0800
Message-ID: <20240224001153.2584030-5-jstultz@google.com>
In-Reply-To: <20240224001153.2584030-1-jstultz@google.com>
References: <20240224001153.2584030-1-jstultz@google.com>
Subject: [RESEND][PATCH v8 4/7] sched: Add do_push_task helper
From: John Stultz
To: LKML
Cc: "Connor O'Brien", Joel Fernandes, Qais Yousef, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
	Youssef Esmat, Mel Gorman, Daniel Bristot de Oliveira, Will Deacon,
	Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
	K Prateek Nayak, Thomas Gleixner, kernel-team@android.com, John Stultz

From: Connor O'Brien

Switch the logic that deactivates a task, sets its task cpu, and
reactivates it on a different rq over to a helper that will later be
extended to push entire blocked task chains.

This patch was broken out from a larger chain migration patch
originally by Connor O'Brien.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Connor O'Brien
[jstultz: split out from larger chain migration patch]
Signed-off-by: John Stultz
---
v8:
* Renamed from push_task_chain to do_push_task so it makes more sense
  without proxy-execution
---
 kernel/sched/core.c     | 4 +---
 kernel/sched/deadline.c | 8 ++------
 kernel/sched/rt.c       | 8 ++------
 kernel/sched/sched.h    | 9 +++++++++
 4 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 9116bcc90346..ad4748327651 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2714,9 +2714,7 @@ int push_cpu_stop(void *arg)

 	// XXX validate p is still the highest prio task
 	if (task_rq(p) == rq) {
-		deactivate_task(rq, p, 0);
-		set_task_cpu(p, lowest_rq->cpu);
-		activate_task(lowest_rq, p, 0);
+		do_push_task(rq, lowest_rq, p);
 		resched_curr(lowest_rq);
 	}

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index a04a436af8cc..e68d88963e89 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2443,9 +2443,7 @@ static int push_dl_task(struct rq *rq)
 		goto retry;
 	}

-	deactivate_task(rq, next_task, 0);
-	set_task_cpu(next_task, later_rq->cpu);
-	activate_task(later_rq, next_task, 0);
+	do_push_task(rq, later_rq, next_task);
 	ret = 1;

 	resched_curr(later_rq);
@@ -2531,9 +2529,7 @@ static void pull_dl_task(struct rq *this_rq)
 			if (is_migration_disabled(p)) {
 				push_task = get_push_task(src_rq);
 			} else {
-				deactivate_task(src_rq, p, 0);
-				set_task_cpu(p, this_cpu);
-				activate_task(this_rq, p, 0);
+				do_push_task(src_rq, this_rq, p);
 				dmin = p->dl.deadline;
 				resched = true;
 			}
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 3261b067b67e..dd072d11cc02 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2106,9 +2106,7 @@ static int push_rt_task(struct rq *rq, bool pull)
 		goto retry;
 	}

-	deactivate_task(rq, next_task, 0);
-	set_task_cpu(next_task, lowest_rq->cpu);
-	activate_task(lowest_rq, next_task, 0);
+	do_push_task(rq, lowest_rq, next_task);
 	resched_curr(lowest_rq);
 	ret = 1;

@@ -2379,9 +2377,7 @@ static void pull_rt_task(struct rq *this_rq)
 			if (is_migration_disabled(p)) {
 				push_task = get_push_task(src_rq);
 			} else {
-				deactivate_task(src_rq, p, 0);
-				set_task_cpu(p, this_cpu);
-				activate_task(this_rq, p, 0);
+				do_push_task(src_rq, this_rq, p);
 				resched = true;
 			}
 			/*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 001fe047bd5d..6ca83837e0f4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3472,5 +3472,14 @@ static inline void init_sched_mm_cid(struct task_struct *t) { }
 extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
 extern int entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se);

+#ifdef CONFIG_SMP
+static inline
+void do_push_task(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
+{
+	deactivate_task(rq, task, 0);
+	set_task_cpu(task, dst_rq->cpu);
+	activate_task(dst_rq, task, 0);
+}
+#endif
 #endif /* _KERNEL_SCHED_SCHED_H */
From patchwork Sat Feb 24 00:11:45 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 205730
Date: Fri, 23 Feb 2024 16:11:45 -0800
Message-ID: <20240224001153.2584030-6-jstultz@google.com>
In-Reply-To: <20240224001153.2584030-1-jstultz@google.com>
References: <20240224001153.2584030-1-jstultz@google.com>
Subject: [RESEND][PATCH v8 5/7] sched: Consolidate pick_*_task to
 task_is_pushable helper
From: John Stultz
To: LKML
Cc: "Connor O'Brien", Joel Fernandes, Qais Yousef, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
	Youssef Esmat, Mel Gorman, Daniel Bristot de Oliveira, Will Deacon,
	Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
	K Prateek Nayak, Thomas Gleixner, kernel-team@android.com, John Stultz

From: Connor O'Brien

Consolidate the rt and deadline pick_*_task() functions into a single
task_is_pushable() helper.

This patch was broken out from a larger chain migration patch
originally by Connor O'Brien.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Connor O'Brien
[jstultz: split out from larger chain migration patch, renamed helper
 function]
Signed-off-by: John Stultz
---
v7:
* Split from chain migration patch
* Renamed function
---
 kernel/sched/deadline.c | 10 +---------
 kernel/sched/rt.c       | 11 +----------
 kernel/sched/sched.h    | 10 ++++++++++
 3 files changed, 12 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index e68d88963e89..1b9cdb507498 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2179,14 +2179,6 @@ static void task_fork_dl(struct task_struct *p)
 /* Only try algorithms three times */
 #define DL_MAX_TRIES 3

-static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
-{
-	if (!task_on_cpu(rq, p) &&
-	    cpumask_test_cpu(cpu, &p->cpus_mask))
-		return 1;
-	return 0;
-}
-
 /*
  * Return the earliest pushable rq's task, which is suitable to be executed
  * on the CPU, NULL otherwise:
@@ -2205,7 +2197,7 @@ static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu)

 	if (next_node) {
 		p = __node_2_pdl(next_node);

-		if (pick_dl_task(rq, p, cpu))
+		if (task_is_pushable(rq, p, cpu) == 1)
 			return p;

 		next_node = rb_next(next_node);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index dd072d11cc02..638e7c158ae4 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1791,15 +1791,6 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
 /* Only try algorithms three times */
 #define RT_MAX_TRIES 3

-static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
-{
-	if (!task_on_cpu(rq, p) &&
-	    cpumask_test_cpu(cpu, &p->cpus_mask))
-		return 1;
-
-	return 0;
-}
-
 /*
  * Return the highest pushable rq's task, which is suitable to be executed
  * on the CPU, NULL otherwise
@@ -1813,7 +1804,7 @@ static struct task_struct *pick_highest_pushable_task(struct rq *rq, int cpu)
 		return NULL;

 	plist_for_each_entry(p, head, pushable_tasks) {
-		if (pick_rt_task(rq, p, cpu))
+		if (task_is_pushable(rq, p, cpu) == 1)
 			return p;
 	}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 6ca83837e0f4..c83e5e0672dc 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3480,6 +3480,16 @@ void do_push_task(struct rq *rq, struct rq *dst_rq, struct task_struct *task)
 	set_task_cpu(task, dst_rq->cpu);
 	activate_task(dst_rq, task, 0);
 }
+
+static inline
+int task_is_pushable(struct rq *rq, struct task_struct *p, int cpu)
+{
+	if (!task_on_cpu(rq, p) &&
+	    cpumask_test_cpu(cpu, &p->cpus_mask))
+		return 1;
+
+	return 0;
+}
 #endif

 #endif /* _KERNEL_SCHED_SCHED_H */
From patchwork Sat Feb 24 00:11:46 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 205732
Date: Fri, 23 Feb 2024 16:11:46 -0800
In-Reply-To: <20240224001153.2584030-1-jstultz@google.com>
References: <20240224001153.2584030-1-jstultz@google.com>
Message-ID: <20240224001153.2584030-7-jstultz@google.com>
Subject: [RESEND][PATCH v8 6/7] sched: Split out __schedule() deactivate
 task logic into a helper
From: John Stultz
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Valentin Schneider, Steven Rostedt, Ben Segall, Zimuzo Ezeozue,
	Youssef Esmat, Mel Gorman, Daniel Bristot de Oliveira, Will Deacon,
	Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya, Xuewen Yan,
	K Prateek Nayak, Thomas Gleixner, kernel-team@android.com

As we're going to re-use the deactivation logic, split it into a helper.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: John Stultz
---
v6:
* Define function as static to avoid "no previous prototype" warnings,
  as Reported-by: kernel test robot
v7:
* Rename state to task_state to be more clear, as suggested by Metin Kaya
---
 kernel/sched/core.c | 72 +++++++++++++++++++++++++++------------------
 1 file changed, 43 insertions(+), 29 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ad4748327651..b537e5f501ea 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6563,6 +6563,48 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 # define SM_MASK_PREEMPT	SM_PREEMPT
 #endif

+/*
+ * Helper function for __schedule()
+ *
+ * If a task does not have signals pending, deactivate it and return true.
+ * Otherwise it marks the task's __state as TASK_RUNNING and returns false.
+ */
+static bool try_to_deactivate_task(struct rq *rq, struct task_struct *p,
+				   unsigned long task_state)
+{
+	if (signal_pending_state(task_state, p)) {
+		WRITE_ONCE(p->__state, TASK_RUNNING);
+	} else {
+		p->sched_contributes_to_load =
+			(task_state & TASK_UNINTERRUPTIBLE) &&
+			!(task_state & TASK_NOLOAD) &&
+			!(task_state & TASK_FROZEN);
+
+		if (p->sched_contributes_to_load)
+			rq->nr_uninterruptible++;
+
+		/*
+		 * __schedule()			ttwu()
+		 *   prev_state = prev->state;	  if (p->on_rq && ...)
+		 *   if (prev_state)		    goto out;
+		 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
+		 *				  p->state = TASK_WAKING
+		 *
+		 * Where __schedule() and ttwu() have matching control dependencies.
+		 *
+		 * After this, schedule() must not care about p->state any more.
+		 */
+		deactivate_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
+
+		if (p->in_iowait) {
+			atomic_inc(&rq->nr_iowait);
+			delayacct_blkio_start();
+		}
+		return true;
+	}
+	return false;
+}
+
 /*
  * __schedule() is the main scheduler function.
  *
@@ -6654,35 +6696,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	 */
 	prev_state = READ_ONCE(prev->__state);
 	if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
-		if (signal_pending_state(prev_state, prev)) {
-			WRITE_ONCE(prev->__state, TASK_RUNNING);
-		} else {
-			prev->sched_contributes_to_load =
-				(prev_state & TASK_UNINTERRUPTIBLE) &&
-				!(prev_state & TASK_NOLOAD) &&
-				!(prev_state & TASK_FROZEN);
-
-			if (prev->sched_contributes_to_load)
-				rq->nr_uninterruptible++;
-
-			/*
-			 * __schedule()			ttwu()
-			 *   prev_state = prev->state;	  if (p->on_rq && ...)
-			 *   if (prev_state)		    goto out;
-			 *     p->on_rq = 0;		  smp_acquire__after_ctrl_dep();
-			 *				  p->state = TASK_WAKING
-			 *
-			 * Where __schedule() and ttwu() have matching control dependencies.
-			 *
-			 * After this, schedule() must not care about p->state any more.
-			 */
-			deactivate_task(rq, prev, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
-
-			if (prev->in_iowait) {
-				atomic_inc(&rq->nr_iowait);
-				delayacct_blkio_start();
-			}
-		}
+		try_to_deactivate_task(rq, prev, prev_state);
 		switch_count = &prev->nvcsw;
 	}
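For intuition about the helper's two exits, here is a standalone toy model;
the state and flag fields are hypothetical simplifications of __state,
signal_pending_state() and deactivate_task(), not the kernel's versions.

/* Toy model of try_to_deactivate_task()'s two outcomes. */
#include <stdbool.h>
#include <stdio.h>

enum toy_state { TOY_RUNNING, TOY_INTERRUPTIBLE };

struct toy_task { enum toy_state state; bool sigpending; bool on_rq; };

/* Returns true if the task was actually taken off the runqueue. */
static bool try_to_deactivate(struct toy_task *p)
{
	if (p->sigpending && p->state == TOY_INTERRUPTIBLE) {
		p->state = TOY_RUNNING;	/* abort the sleep, stay runnable */
		return false;
	}
	p->on_rq = false;		/* the deactivate_task() leg */
	return true;
}

int main(void)
{
	struct toy_task a = { TOY_INTERRUPTIBLE, true,  true };
	struct toy_task b = { TOY_INTERRUPTIBLE, false, true };

	printf("a deactivated? %d (on_rq=%d)\n", try_to_deactivate(&a), a.on_rq);
	printf("b deactivated? %d (on_rq=%d)\n", try_to_deactivate(&b), b.on_rq);
	return 0;
}

The bool return value is what makes the logic reusable: a future caller can
learn whether the task actually went to sleep without re-deriving it from
task state.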
From patchwork Sat Feb 24 00:11:47 2024
X-Patchwork-Submitter: John Stultz
X-Patchwork-Id: 205731
Date: Fri, 23 Feb 2024 16:11:47 -0800
Message-ID: <20240224001153.2584030-8-jstultz@google.com>
In-Reply-To: <20240224001153.2584030-1-jstultz@google.com>
References: <20240224001153.2584030-1-jstultz@google.com>
Subject: [RESEND][PATCH v8 7/7] sched: Split scheduler and execution contexts
From: John Stultz
To: LKML
Cc: Peter Zijlstra, Joel Fernandes, Qais Yousef, Ingo Molnar, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
	Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
	Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
	"Paul E. McKenney", Xuewen Yan, K Prateek Nayak, Metin Kaya,
	Thomas Gleixner, kernel-team@android.com, "Connor O'Brien", John Stultz

From: Peter Zijlstra

Let's define the scheduling context as all the scheduler state in
task_struct for the task selected to run, and the execution context as
all state required to actually run the task.

Currently both are intertwined in task_struct. We want to logically
split these such that we can use the scheduling context of the task
selected to be scheduled, but use the execution context of a different
task to actually be run.

To this end, introduce the rq_selected() macro to point to the
task_struct selected from the runqueue by the scheduler; it will be
used for scheduler state, while rq->curr is preserved to indicate the
execution context of the task that will actually be run.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Metin Kaya
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Juri Lelli
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20181009092434.26221-5-juri.lelli@redhat.com
[add additional comments and update more sched_class code to use
 rq::proxy]
Signed-off-by: Connor O'Brien
[jstultz: Rebased and resolved minor collisions, reworked to use
 accessors, tweaked update_curr_common to use rq_proxy fixing rt
 scheduling issues]
Signed-off-by: John Stultz
---
v2:
* Reworked to use accessors
* Fixed update_curr_common to use proxy instead of curr
v3:
* Tweaked wrapper names
* Swapped proxy for selected for clarity
v4:
* Minor variable name tweaks for readability
* Use a macro instead of an inline function and drop other helper
  functions as suggested by Peter.
* Remove verbose comments/questions to avoid review distractions,
  as suggested by Dietmar
v5:
* Add CONFIG_PROXY_EXEC option to this patch so the new logic can be
  tested with this change
* Minor fix to grab rq_selected when holding the rq lock
v7:
* Minor spelling fix and unused argument fixes suggested by Metin Kaya
* Switch to curr_selected for consistency, and minor rewording of
  commit message for clarity
* Rename variables selected instead of curr when we're using
  rq_selected()
* Reduce macros in CONFIG_SCHED_PROXY_EXEC ifdef sections, as
  suggested by Metin Kaya
v8:
* Use rq->curr, not rq_selected with task_tick, as suggested by Valentin
* Minor rework to reorder this with CONFIG_SCHED_PROXY_EXEC patch
---
 kernel/sched/core.c     | 46 ++++++++++++++++++++++++++---------------
 kernel/sched/deadline.c | 35 ++++++++++++++++---------------
 kernel/sched/fair.c     | 18 ++++++++--------
 kernel/sched/rt.c       | 40 +++++++++++++++++------------------
 kernel/sched/sched.h    | 25 ++++++++++++++++++++--
 5 files changed, 99 insertions(+), 65 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b537e5f501ea..c17f91d6ceba 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -794,7 +794,7 @@ static enum hrtimer_restart hrtick(struct hrtimer *timer)
 	rq_lock(rq, &rf);
 	update_rq_clock(rq);
-	rq->curr->sched_class->task_tick(rq, rq->curr, 1);
+	rq_selected(rq)->sched_class->task_tick(rq, rq->curr, 1);
 	rq_unlock(rq, &rf);

 	return HRTIMER_NORESTART;
@@ -2238,16 +2238,18 @@ static inline void check_class_changed(struct rq *rq, struct task_struct *p,

 void wakeup_preempt(struct rq *rq, struct task_struct *p, int flags)
 {
-	if (p->sched_class == rq->curr->sched_class)
-		rq->curr->sched_class->wakeup_preempt(rq, p, flags);
-	else if (sched_class_above(p->sched_class, rq->curr->sched_class))
+	struct task_struct *selected = rq_selected(rq);
+
+	if (p->sched_class == selected->sched_class)
+		selected->sched_class->wakeup_preempt(rq, p, flags);
+	else if (sched_class_above(p->sched_class, selected->sched_class))
 		resched_curr(rq);

 	/*
 	 * A queue event has occurred, and we're going to schedule.  In
 	 * this case, we can save a useless back to back clock update.
 	 */
-	if (task_on_rq_queued(rq->curr) && test_tsk_need_resched(rq->curr))
+	if (task_on_rq_queued(selected) && test_tsk_need_resched(rq->curr))
 		rq_clock_skip_update(rq);
 }
@@ -2774,7 +2776,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 	lockdep_assert_held(&p->pi_lock);

 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);

 	if (queued) {
 		/*
@@ -5587,7 +5589,7 @@ unsigned long long task_sched_runtime(struct task_struct *p)
 	 * project cycles that may never be accounted to this
 	 * thread, breaking clock_gettime().
 	 */
-	if (task_current(rq, p) && task_on_rq_queued(p)) {
+	if (task_current_selected(rq, p) && task_on_rq_queued(p)) {
 		prefetch_curr_exec_start(p);
 		update_rq_clock(rq);
 		p->sched_class->update_curr(rq);
@@ -5655,7 +5657,8 @@ void scheduler_tick(void)
 {
 	int cpu = smp_processor_id();
 	struct rq *rq = cpu_rq(cpu);
-	struct task_struct *curr = rq->curr;
+	/* accounting goes to the selected task */
+	struct task_struct *selected;
 	struct rq_flags rf;
 	unsigned long thermal_pressure;
 	u64 resched_latency;
@@ -5666,16 +5669,17 @@ void scheduler_tick(void)
 	sched_clock_tick();

 	rq_lock(rq, &rf);
+	selected = rq_selected(rq);

 	update_rq_clock(rq);
 	thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));
 	update_thermal_load_avg(rq_clock_thermal(rq), rq, thermal_pressure);
-	curr->sched_class->task_tick(rq, curr, 0);
+	selected->sched_class->task_tick(rq, selected, 0);
 	if (sched_feat(LATENCY_WARN))
 		resched_latency = cpu_resched_latency(rq);
 	calc_global_load_tick(rq);
 	sched_core_tick(rq);
-	task_tick_mm_cid(rq, curr);
+	task_tick_mm_cid(rq, selected);

 	rq_unlock(rq, &rf);

@@ -5684,8 +5688,8 @@ void scheduler_tick(void)

 	perf_event_task_tick();

-	if (curr->flags & PF_WQ_WORKER)
-		wq_worker_tick(curr);
+	if (selected->flags & PF_WQ_WORKER)
+		wq_worker_tick(selected);

 #ifdef CONFIG_SMP
 	rq->idle_balance = idle_cpu(cpu);
@@ -5750,6 +5754,12 @@ static void sched_tick_remote(struct work_struct *work)
 		struct task_struct *curr = rq->curr;

 		if (cpu_online(cpu)) {
+			/*
+			 * Since this is a remote tick for full dynticks mode,
+			 * we are always sure that there is no proxy (only a
+			 * single task is running).
+			 */
+			SCHED_WARN_ON(rq->curr != rq_selected(rq));
 			update_rq_clock(rq);

 			if (!is_idle_task(curr)) {
@@ -6701,6 +6711,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
 	}

 	next = pick_next_task(rq, prev, &rf);
+	rq_set_selected(rq, next);
 	clear_tsk_need_resched(prev);
 	clear_preempt_need_resched();
 #ifdef CONFIG_SCHED_DEBUG
@@ -7201,7 +7212,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)

 	prev_class = p->sched_class;
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, queue_flag);
 	if (running)
@@ -7291,7 +7302,7 @@ void set_user_nice(struct task_struct *p, long nice)
 	}

 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_NOCLOCK);
 	if (running)
@@ -7870,7 +7881,7 @@ static int __sched_setscheduler(struct task_struct *p,
 	}

 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);
 	if (queued)
 		dequeue_task(rq, p, queue_flags);
 	if (running)
@@ -9297,6 +9308,7 @@ void __init init_idle(struct task_struct *idle, int cpu)
 	rcu_read_unlock();

 	rq->idle = idle;
+	rq_set_selected(rq, idle);
 	rcu_assign_pointer(rq->curr, idle);
 	idle->on_rq = TASK_ON_RQ_QUEUED;
 #ifdef CONFIG_SMP
@@ -9386,7 +9398,7 @@ void sched_setnuma(struct task_struct *p, int nid)

 	rq = task_rq_lock(p, &rf);
 	queued = task_on_rq_queued(p);
-	running = task_current(rq, p);
+	running = task_current_selected(rq, p);

 	if (queued)
 		dequeue_task(rq, p, DEQUEUE_SAVE);
@@ -10491,7 +10503,7 @@ void sched_move_task(struct task_struct *tsk)

 	update_rq_clock(rq);

-	running = task_current(rq, tsk);
+	running = task_current_selected(rq, tsk);
 	queued = task_on_rq_queued(tsk);

 	if (queued)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1b9cdb507498..c30b592d6e9d 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1218,7 +1218,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
 #endif

 	enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
-	if (dl_task(rq->curr))
+	if (dl_task(rq_selected(rq)))
 		wakeup_preempt_dl(rq, p, 0);
 	else
 		resched_curr(rq);
@@ -1442,7 +1442,7 @@ void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
 */
 static void update_curr_dl(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_dl_entity *dl_se = &curr->dl;
 	s64 delta_exec;

@@ -1899,7 +1899,7 @@ static int find_later_rq(struct task_struct *task);
 static int
 select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 {
-	struct task_struct *curr;
+	struct task_struct *curr, *selected;
 	bool select_rq;
 	struct rq *rq;

@@ -1910,6 +1910,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)

 	rcu_read_lock();
 	curr = READ_ONCE(rq->curr); /* unlocked access */
+	selected = READ_ONCE(rq_selected(rq));

 	/*
 	 * If we are dealing with a -deadline task, we must
@@ -1920,9 +1921,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 	 * other hand, if it has a shorter deadline, we
 	 * try to make it stay here, it might be important.
 	 */
-	select_rq = unlikely(dl_task(curr)) &&
+	select_rq = unlikely(dl_task(selected)) &&
 		    (curr->nr_cpus_allowed < 2 ||
-		     !dl_entity_preempt(&p->dl, &curr->dl)) &&
+		     !dl_entity_preempt(&p->dl, &selected->dl)) &&
 		    p->nr_cpus_allowed > 1;

 	/*
@@ -1985,7 +1986,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
 	 * let's hope p can move out.
 	 */
 	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpudl_find(&rq->rd->cpudl, rq->curr, NULL))
+	    !cpudl_find(&rq->rd->cpudl, rq_selected(rq), NULL))
 		return;

 	/*
@@ -2024,7 +2025,7 @@ static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 static void wakeup_preempt_dl(struct rq *rq, struct task_struct *p,
 				  int flags)
 {
-	if (dl_entity_preempt(&p->dl, &rq->curr->dl)) {
+	if (dl_entity_preempt(&p->dl, &rq_selected(rq)->dl)) {
 		resched_curr(rq);
 		return;
 	}
@@ -2034,7 +2035,7 @@ static void wakeup_preempt_dl(struct rq *rq, struct task_struct *p,
 	 * In the unlikely case current and p have the same deadline
 	 * let us try to decide what's the best thing to do...
 	 */
-	if ((p->dl.deadline == rq->curr->dl.deadline) &&
+	if ((p->dl.deadline == rq_selected(rq)->dl.deadline) &&
 	    !test_tsk_need_resched(rq->curr))
 		check_preempt_equal_dl(rq, p);
 #endif /* CONFIG_SMP */
@@ -2066,7 +2067,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
 	if (!first)
 		return;

-	if (rq->curr->sched_class != &dl_sched_class)
+	if (rq_selected(rq)->sched_class != &dl_sched_class)
 		update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);

 	deadline_queue_push_tasks(rq);
@@ -2391,8 +2392,8 @@ static int push_dl_task(struct rq *rq)
 	 * can move away, it makes sense to just reschedule
 	 * without going further in pushing next_task.
 	 */
-	if (dl_task(rq->curr) &&
-	    dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) &&
+	if (dl_task(rq_selected(rq)) &&
+	    dl_time_before(next_task->dl.deadline, rq_selected(rq)->dl.deadline) &&
 	    rq->curr->nr_cpus_allowed > 1) {
 		resched_curr(rq);
 		return 0;
@@ -2515,7 +2516,7 @@ static void pull_dl_task(struct rq *this_rq)
 			 * deadline than the current task of its runqueue.
 			 */
 			if (dl_time_before(p->dl.deadline,
-					   src_rq->curr->dl.deadline))
+					   rq_selected(src_rq)->dl.deadline))
 				goto skip;

 			if (is_migration_disabled(p)) {
@@ -2554,9 +2555,9 @@ static void task_woken_dl(struct rq *rq, struct task_struct *p)
 	if (!task_on_cpu(rq, p) &&
 	    !test_tsk_need_resched(rq->curr) &&
 	    p->nr_cpus_allowed > 1 &&
-	    dl_task(rq->curr) &&
+	    dl_task(rq_selected(rq)) &&
 	    (rq->curr->nr_cpus_allowed < 2 ||
-	     !dl_entity_preempt(&p->dl, &rq->curr->dl))) {
+	     !dl_entity_preempt(&p->dl, &rq_selected(rq)->dl))) {
 		push_dl_tasks(rq);
 	}
 }
@@ -2731,12 +2732,12 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 		return;
 	}

-	if (rq->curr != p) {
+	if (rq_selected(rq) != p) {
 #ifdef CONFIG_SMP
 		if (p->nr_cpus_allowed > 1 && rq->dl.overloaded)
 			deadline_queue_push_tasks(rq);
 #endif
-		if (dl_task(rq->curr))
+		if (dl_task(rq_selected(rq)))
 			wakeup_preempt_dl(rq, p, 0);
 		else
 			resched_curr(rq);
@@ -2765,7 +2766,7 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 		if (!rq->dl.overloaded)
 			deadline_queue_pull_task(rq);

-		if (task_current(rq, p)) {
+		if (task_current_selected(rq, p)) {
 			/*
 			 * If we now have a earlier deadline task than p,
 			 * then reschedule, provided p is still on this
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 533547e3c90a..dc342e1fc420 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1140,7 +1140,7 @@ static inline void update_curr_task(struct task_struct *p, s64 delta_exec)
 */
 s64 update_curr_common(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	s64 delta_exec;

 	delta_exec = update_curr_se(rq, &curr->se);
@@ -1177,7 +1177,7 @@ static void update_curr(struct cfs_rq *cfs_rq)

 static void update_curr_fair(struct rq *rq)
 {
-	update_curr(cfs_rq_of(&rq->curr->se));
+	update_curr(cfs_rq_of(&rq_selected(rq)->se));
 }

 static inline void
@@ -6627,7 +6627,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 		s64 delta = slice - ran;

 		if (delta < 0) {
-			if (task_current(rq, p))
+			if (task_current_selected(rq, p))
 				resched_curr(rq);
 			return;
 		}
@@ -6642,7 +6642,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
 */
 static void hrtick_update(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);

 	if (!hrtick_enabled_fair(rq) || curr->sched_class != &fair_sched_class)
 		return;
@@ -8267,7 +8267,7 @@ static void set_next_buddy(struct sched_entity *se)
 */
 static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int wake_flags)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_entity *se = &curr->se, *pse = &p->se;
 	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
 	int cse_is_idle, pse_is_idle;
@@ -8298,7 +8298,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
 	 * prevents us from potentially nominating it as a false LAST_BUDDY
 	 * below.
 	 */
-	if (test_tsk_need_resched(curr))
+	if (test_tsk_need_resched(rq->curr))
 		return;

 	/* Idle tasks are by definition preempted by non-idle tasks. */
@@ -9282,7 +9282,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
 	 * update_load_avg() can call cpufreq_update_util(). Make sure that RT,
 	 * DL and IRQ signals have been updated before updating CFS.
 	 */
-	curr_class = rq->curr->sched_class;
+	curr_class = rq_selected(rq)->sched_class;

 	thermal_pressure = arch_scale_thermal_pressure(cpu_of(rq));

@@ -12673,7 +12673,7 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
 	 * our priority decreased, or if we are not currently running on
 	 * this runqueue and our priority is higher than the current's
 	 */
-	if (task_current(rq, p)) {
+	if (task_current_selected(rq, p)) {
 		if (p->prio > oldprio)
 			resched_curr(rq);
 	} else
@@ -12776,7 +12776,7 @@ static void switched_to_fair(struct rq *rq, struct task_struct *p)
 		 * kick off the schedule if running, otherwise just see
 		 * if we can still preempt the current task.
 		 */
-		if (task_current(rq, p))
+		if (task_current_selected(rq, p))
 			resched_curr(rq);
 		else
 			wakeup_preempt(rq, p, 0);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 638e7c158ae4..48fc7a198f1a 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -530,7 +530,7 @@ static void dequeue_rt_entity(struct sched_rt_entity *rt_se, unsigned int flags)

 static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
 {
-	struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
+	struct task_struct *curr = rq_selected(rq_of_rt_rq(rt_rq));
 	struct rq *rq = rq_of_rt_rq(rt_rq);
 	struct sched_rt_entity *rt_se;

@@ -1000,7 +1000,7 @@ static int sched_rt_runtime_exceeded(struct rt_rq *rt_rq)
 */
 static void update_curr_rt(struct rq *rq)
 {
-	struct task_struct *curr = rq->curr;
+	struct task_struct *curr = rq_selected(rq);
 	struct sched_rt_entity *rt_se = &curr->rt;
 	s64 delta_exec;

@@ -1543,7 +1543,7 @@ static int find_lowest_rq(struct task_struct *task);
 static int
 select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 {
-	struct task_struct *curr;
+	struct task_struct *curr, *selected;
 	struct rq *rq;
 	bool test;

@@ -1555,6 +1555,7 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)

 	rcu_read_lock();
 	curr = READ_ONCE(rq->curr); /* unlocked access */
+	selected = READ_ONCE(rq_selected(rq));

 	/*
 	 * If the current task on @p's runqueue is an RT task, then
@@ -1583,8 +1584,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
 	 * systems like big.LITTLE.
 	 */
 	test = curr &&
-	       unlikely(rt_task(curr)) &&
-	       (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
+	       unlikely(rt_task(selected)) &&
+	       (curr->nr_cpus_allowed < 2 || selected->prio <= p->prio);

 	if (test || !rt_task_fits_capacity(p, cpu)) {
 		int target = find_lowest_rq(p);
@@ -1614,12 +1615,8 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)

 static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
 {
-	/*
-	 * Current can't be migrated, useless to reschedule,
-	 * let's hope p can move out.
-	 */
 	if (rq->curr->nr_cpus_allowed == 1 ||
-	    !cpupri_find(&rq->rd->cpupri, rq->curr, NULL))
+	    !cpupri_find(&rq->rd->cpupri, rq_selected(rq), NULL))
 		return;

 	/*
@@ -1662,7 +1659,9 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
 */
 static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
 {
-	if (p->prio < rq->curr->prio) {
+	struct task_struct *curr = rq_selected(rq);
+
+	if (p->prio < curr->prio) {
 		resched_curr(rq);
 		return;
 	}
@@ -1680,7 +1679,7 @@ static void wakeup_preempt_rt(struct rq *rq, struct task_struct *p, int flags)
 	 * to move current somewhere else, making room for our non-migratable
 	 * task.
 	 */
-	if (p->prio == rq->curr->prio && !test_tsk_need_resched(rq->curr))
+	if (p->prio == curr->prio && !test_tsk_need_resched(rq->curr))
 		check_preempt_equal_prio(rq, p);
 #endif
 }
@@ -1705,7 +1704,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool first)
 	 * utilization. We only care of the case where we start to schedule a
 	 * rt task
 	 */
-	if (rq->curr->sched_class != &rt_sched_class)
+	if (rq_selected(rq)->sched_class != &rt_sched_class)
 		update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 0);

 	rt_queue_push_tasks(rq);
@@ -1977,6 +1976,7 @@ static struct task_struct *pick_next_pushable_task(struct rq *rq)

 	BUG_ON(rq->cpu != task_cpu(p));
 	BUG_ON(task_current(rq, p));
+	BUG_ON(task_current_selected(rq, p));
 	BUG_ON(p->nr_cpus_allowed <= 1);

 	BUG_ON(!task_on_rq_queued(p));
@@ -2009,7 +2009,7 @@ static int push_rt_task(struct rq *rq, bool pull)
 	 * higher priority than current. If that's the case
 	 * just reschedule current.
 	 */
-	if (unlikely(next_task->prio < rq->curr->prio)) {
+	if (unlikely(next_task->prio < rq_selected(rq)->prio)) {
 		resched_curr(rq);
 		return 0;
 	}
@@ -2362,7 +2362,7 @@ static void pull_rt_task(struct rq *this_rq)
 			 * p if it is lower in priority than the
 			 * current task on the run queue
 			 */
-			if (p->prio < src_rq->curr->prio)
+			if (p->prio < rq_selected(src_rq)->prio)
 				goto skip;

 			if (is_migration_disabled(p)) {
@@ -2404,9 +2404,9 @@ static void task_woken_rt(struct rq *rq, struct task_struct *p)
 	bool need_to_push = !task_on_cpu(rq, p) &&
 			    !test_tsk_need_resched(rq->curr) &&
 			    p->nr_cpus_allowed > 1 &&
-			    (dl_task(rq->curr) || rt_task(rq->curr)) &&
+			    (dl_task(rq_selected(rq)) || rt_task(rq_selected(rq))) &&
 			    (rq->curr->nr_cpus_allowed < 2 ||
-			     rq->curr->prio <= p->prio);
+			     rq_selected(rq)->prio <= p->prio);

 	if (need_to_push)
 		push_rt_tasks(rq);
@@ -2490,7 +2490,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
 		if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
 			rt_queue_push_tasks(rq);
 #endif /* CONFIG_SMP */
-		if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
+		if (p->prio < rq_selected(rq)->prio && cpu_online(cpu_of(rq)))
 			resched_curr(rq);
 	}
 }
@@ -2505,7 +2505,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 	if (!task_on_rq_queued(p))
 		return;

-	if (task_current(rq, p)) {
+	if (task_current_selected(rq, p)) {
 #ifdef CONFIG_SMP
 		/*
 		 * If our priority decreases while running, we
@@ -2531,7 +2531,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
 		 * greater than the current running task
 		 * then reschedule.
 		 */
-		if (p->prio < rq->curr->prio)
+		if (p->prio < rq_selected(rq)->prio)
 			resched_curr(rq);
 	}
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c83e5e0672dc..808d6ee8ae33 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1030,7 +1030,7 @@ struct rq {
 	 */
 	unsigned int		nr_uninterruptible;

-	struct task_struct __rcu	*curr;
+	struct task_struct __rcu	*curr;	/* Execution context */
 	struct task_struct	*idle;
 	struct task_struct	*stop;
 	unsigned long		next_balance;
@@ -1225,6 +1225,13 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		raw_cpu_ptr(&runqueues)

+/* For now, rq_selected == rq->curr */
+#define rq_selected(rq)		((rq)->curr)
+static inline void rq_set_selected(struct rq *rq, struct task_struct *t)
+{
+	/* Do nothing */
+}
+
 struct sched_group;
 #ifdef CONFIG_SCHED_CORE
 static inline struct cpumask *sched_group_span(struct sched_group *sg);
@@ -2148,11 +2155,25 @@ static inline u64 global_rt_runtime(void)
 	return (u64)sysctl_sched_rt_runtime * NSEC_PER_USEC;
 }

+/*
+ * Is p the current execution context?
+ */
 static inline int task_current(struct rq *rq, struct task_struct *p)
 {
 	return rq->curr == p;
 }

+/*
+ * Is p the current scheduling context?
+ *
+ * Note that it might be the current execution context at the same time if
+ * rq->curr == rq_selected() == p.
+ */
+static inline int task_current_selected(struct rq *rq, struct task_struct *p)
+{
+	return rq_selected(rq) == p;
+}
+
 static inline int task_on_cpu(struct rq *rq, struct task_struct *p)
 {
 #ifdef CONFIG_SMP
@@ -2322,7 +2343,7 @@ struct sched_class {

 static inline void put_prev_task(struct rq *rq, struct task_struct *prev)
 {
-	WARN_ON_ONCE(rq_selected(rq) != prev);
+	prev->sched_class->put_prev_task(rq, prev);
+}
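As a conceptual illustration of the split (not kernel code), the sketch below
models how call sites consult the scheduling context through rq_selected()
while rq->curr remains the execution context; with the placeholder macro
above the two coincide, exactly as in this patch. All names here are toy
stand-ins, and lower prio values mean higher priority, as in the kernel.

/* Toy illustration of scheduling context vs execution context. */
#include <stdio.h>

struct toy_task { const char *name; int prio; };
struct toy_rq   { struct toy_task *curr; /* execution context */ };

/* Scheduling context: today just rq->curr, later a separate pointer. */
#define rq_selected(rq)	((rq)->curr)

static void wakeup_preempt(struct toy_rq *rq, struct toy_task *p)
{
	/* priority decisions consult the *scheduling* context */
	if (p->prio < rq_selected(rq)->prio)
		printf("%s preempts %s\n", p->name, rq_selected(rq)->name);
}

int main(void)
{
	struct toy_task low = { "low", 120 }, high = { "high", 90 };
	struct toy_rq rq = { .curr = &low };

	wakeup_preempt(&rq, &high);	/* prints: high preempts low */
	return 0;
}

The payoff of routing call sites through the accessor now is that a later
proxy-execution patch can redefine rq_selected() and rq_set_selected()
without touching every priority comparison again.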