Message ID | 20240115143743.27827-1-anna-maria@linutronix.de |
---|---|
Headers |
From: Anna-Maria Behnsen <anna-maria@linutronix.de>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>, John Stultz <jstultz@google.com>, Thomas Gleixner <tglx@linutronix.de>, Eric Dumazet <edumazet@google.com>, "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>, Arjan van de Ven <arjan@infradead.org>, "Paul E. McKenney" <paulmck@kernel.org>, Frederic Weisbecker <frederic@kernel.org>, Rik van Riel <riel@surriel.com>, Steven Rostedt <rostedt@goodmis.org>, Sebastian Siewior <bigeasy@linutronix.de>, Giovanni Gherdovich <ggherdovich@suse.cz>, Lukasz Luba <lukasz.luba@arm.com>, "Gautham R. Shenoy" <gautham.shenoy@amd.com>, Srinivas Pandruvada <srinivas.pandruvada@intel.com>, K Prateek Nayak <kprateek.nayak@amd.com>, Anna-Maria Behnsen <anna-maria@linutronix.de>
Subject: [PATCH v10 00/20] timers: Move from a push remote at enqueue to a pull at expiry model
Date: Mon, 15 Jan 2024 15:37:23 +0100
Message-Id: <20240115143743.27827-1-anna-maria@linutronix.de> |
Series |
timers: Move from a push remote at enqueue to a pull at expiry model
|
|
Message
Anna-Maria Behnsen
Jan. 15, 2024, 2:37 p.m. UTC
Hi,

the cleanup patches are already applied, so the series contains only two parts:

 - Patches 1 - 4: timer base idle marking rework with two preparatory
   changes. See the section below for more details.

 - Patches 5 - 20: Updated timer pull model on top of the timer idle rework

The queue is available here:

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel timers/pushpull


Move marking timer bases as idle into tick_nohz_stop_tick()
-----------------------------------------------------------

The idle marking of timer bases is done in get_next_timer_interrupt()
whenever possible. The timer bases are marked idle even if the tick will
not be stopped. This leads to an IPI when a new first timer is enqueued
remotely. To prevent this, setting the timer_base->is_idle flag is
postponed to tick_nohz_stop_tick(). Furthermore, this synchronizes the
states of the timer base (is_idle) and the tick (tick_stopped). With the
timer pull model in place, the idle state of a CPU in the hierarchy is
also synchronized with the other idle related states.


Timer pull model
----------------

Placing timers at enqueue time on a target CPU based on dubious heuristics
does not make any sense:

 1) Most timer wheel timers are canceled or rearmed before they expire.

 2) The heuristics to predict which CPU will be busy when the timer
    expires are wrong by definition.

So placing the timers at enqueue time wastes precious cycles.

The proper solution to this problem is to always queue the timers on the
local CPU and allow the non-pinned timers to be pulled onto a busy CPU at
expiry time.

Therefore split the timer storage into local pinned and global timers:
Local pinned timers are always expired on the CPU on which they have been
queued. Global timers can be expired on any CPU.

As long as a CPU is busy it expires both local and global timers. When a
CPU goes idle it arms for the first expiring local timer. If the first
expiring pinned (local) timer is before the first expiring movable timer,
then no action is required because the CPU will wake up before the first
movable timer expires. If the first expiring movable timer is before the
first expiring pinned (local) timer, then this timer is queued into an
idle timerqueue and eventually expired by some other active CPU.

To avoid global locking the timerqueues are implemented as a hierarchy.
The lowest level of the hierarchy holds the CPUs. The CPUs are associated
to groups of 8, which are separated per node. If more than one CPU group
exists, then a second level in the hierarchy collects the groups.
Depending on the size of the system more than 2 levels are required. Each
group has a "migrator" which checks the timerqueue during the tick for
remote timers to be expired. If the last CPU in a group goes idle it
reports the first expiring event in the group up to the next group(s) in
the hierarchy. If the last CPU in the whole hierarchy goes idle it arms
its timer for the first system-wide expiring timer to ensure that no
timer event is missed.


Testing
~~~~~~~

Enqueue
^^^^^^^

The impact of wasting cycles during enqueue by using the heuristic in
contrast to always queuing the timer on the local CPU was measured with a
micro benchmark. Therefore a timer is enqueued and dequeued in a loop with
1000 repetitions on an isolated CPU. The time the loop takes is measured.
A quarter of the remaining CPUs was kept busy. This measurement was
repeated several times. With the patch queue the average duration was
reduced by approximately 25%.
	145ns	plain v6
	109ns	v6 with patch queue

(A rough sketch of this enqueue/dequeue loop is shown after the "Possible
Next Steps" section below.)

Furthermore the impact of residency in deep idle states of an idle system
was investigated. The patch queue does not degrade this behavior.


dbench test
^^^^^^^^^^^

A dbench test starting X pairs of client servers is used to create load on
the system. The measurable value is the throughput. The tests were
executed on a zen3 machine. The base is the tip tree branch timers/core,
which is based on v6.6-rc1.

governor menu

NR	timers/core		pull-model		impact
----------------------------------------------------------------
1	353.19 (0.19)		353.45 (0.30)		 0.07%
2	700.10 (0.96)		687.00 (0.20)		-1.87%
4	1329.37 (0.63)		1282.91 (0.64)		-3.49%
8	2561.16 (1.28)		2493.56 (1.76)		-2.64%
16	4959.96 (0.80)		4914.59 (0.64)		-0.91%
32	9741.92 (3.44)		8979.83 (1.13)		-7.82%
64	16535.40 (2.84)		16388.47 (4.02)		-0.89%
128	22136.83 (2.42)		23174.50 (1.43)		 4.69%
256	39256.77 (4.48)		38994.00 (0.39)		-0.67%
512	36799.03 (1.83)		38091.10 (0.63)		 3.51%
1024	32903.03 (0.86)		35370.70 (0.89)		 7.50%

governor teo

NR	timers/core		pull-model		impact
----------------------------------------------------------------
1	350.83 (1.27)		352.45 (0.96)		 0.46%
2	699.52 (0.85)		690.10 (0.54)		-1.35%
4	1339.53 (1.99)		1294.71 (2.71)		-3.35%
8	2574.10 (0.76)		2495.46 (1.97)		-3.06%
16	4898.50 (1.74)		4783.06 (1.64)		-2.36%
32	9115.50 (4.63)		9037.83 (1.58)		-0.85%
64	16663.90 (3.80)		16042.00 (1.72)		-3.73%
128	25044.93 (1.11)		23250.03 (1.08)		-7.17%
256	38059.53 (1.70)		39658.57 (2.98)		 4.20%
512	36369.30 (0.39)		38890.13 (0.36)		 6.93%
1024	33956.83 (1.14)		35514.83 (0.29)		 4.59%


Ping Pong Observation
^^^^^^^^^^^^^^^^^^^^^

During testing on a mostly idle machine a ping pong game could be
observed: a process_timeout timer is expired remotely on a non-idle CPU.
Then the CPU where schedule_timeout() was executed to enqueue the timer
comes out of idle and restarts the timer using schedule_timeout() and goes
back to idle again. This is due to the fair scheduler which tries to keep
the task on the CPU on which it previously executed.


Possible Next Steps
~~~~~~~~~~~~~~~~~~~

Simple deferrable timers are no longer required as they can be converted
to global timers. If a CPU goes idle, a formerly deferrable timer will not
prevent the CPU from sleeping as long as possible. Only the last migrator
CPU has to take care of them. Deferrable timers with the TIMER_PINNED flag
need to be expired on the specified CPU but must not prevent the CPU from
going idle. They require their own timer base which is never taken into
account when calculating the next expiry time. This conversion and the
required cleanup will be done in a follow-up series.
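For illustration only, here is a rough, untested sketch of what the
enqueue/dequeue micro benchmark described in the Testing section could
look like. The function and variable names are made up and this is not the
code that produced the numbers above:

  #include <linux/timer.h>
  #include <linux/ktime.h>
  #include <linux/jiffies.h>

  static struct timer_list bench_timer;

  static void bench_timer_fn(struct timer_list *t)
  {
  	/* Never fires: the timer is canceled before its expiry. */
  }

  static u64 bench_enqueue_dequeue(void)
  {
  	ktime_t start, end;
  	int i;

  	timer_setup(&bench_timer, bench_timer_fn, 0);

  	start = ktime_get();
  	for (i = 0; i < 1000; i++) {
  		bench_timer.expires = jiffies + HZ;
  		add_timer(&bench_timer);	/* enqueue */
  		del_timer(&bench_timer);	/* dequeue */
  	}
  	end = ktime_get();

  	return ktime_to_ns(ktime_sub(end, start));
  }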
v9..v10: https://lore.kernel.org/r/20231201092654.34614-1-anna-maria@linutronix.de/
  - Address review feedback of bigeasy

v8..v9: https://lore.kernel.org/r/20231004123454.15691-1-anna-maria@linutronix.de
  - Address review feedback
  - Add more minor cleanup fixes
  - Fix inconsistent idle related states

v7..v8: https://lore.kernel.org/r/20230524070629.6377-1-anna-maria@linutronix.de
  - Address review feedback
  - Move marking timer base idle into tick_nohz_stop_tick()
  - Look ahead function to determine possible sleep length

v6..v7:
  - Address review feedback of Frederic and bigeasy
  - Change lock/unlock fetch next timer interrupt logic after remote expiry
  - Move timer_expire_remote() into tick-internal.h
  - Add documentation section about "Required event and timerqueue update
    after remote expiry"
  - Fix fallout of kernel test robot

v5..v6:
  - Address review of Frederic Weisbecker and Peter Zijlstra (spelling,
    locking, race in tmigr_handle_remote_cpu())
  - Unconditionally set TIMER_PINNED flag in add_timer_on(); introduce
    add_timer() variants which set/unset the TIMER_PINNED flag; drop fixing
    add_timer_on() call sites, as the TIMER_PINNED flag is set implicitly;
    fix workqueue to use add_timer_global() instead of simply add_timer()
    for unbound work.
  - Drop support for siblings to end up in the same level 0 group (could be
    added again in a better way as an improvement later on)
  - Do not send IPI for new first deferrable timers

v4..v5:
  - Address review feedback of Frederic Weisbecker
  - Fix issue with group timer update after remote expiry

v3..v4:
  - Address review feedback of Frederic Weisbecker
  - Address kernel test robot fallout
  - Move patch 16 "add_timer_on(): Make sure callers have TIMER_PINNED
    flag" to the beginning of the queue to prevent timers from ending up
    in the global timer base when they were queued using add_timer_on()
  - Fix some comments and typos

v2..v3: https://lore.kernel.org/r/20170418111102.490432548@linutronix.de/
  - Minimize usage of locks by storing data using atomic_cmpxchg() for
    migrator information and information about active CPUs.
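To illustrate the add_timer() variants mentioned in the v6 changelog
entry, here is a minimal, hypothetical caller sketch. It assumes the
add_timer_local()/add_timer_global() helpers introduced by this series;
demo_timer and demo_arm() are made-up names:

  static struct timer_list demo_timer;

  static void demo_timer_fn(struct timer_list *t)
  {
  	/* timer work */
  }

  static void demo_arm(void)
  {
  	timer_setup(&demo_timer, demo_timer_fn, 0);
  	demo_timer.expires = jiffies + HZ;

  	/*
  	 * add_timer_local() keeps TIMER_PINNED set so the timer always
  	 * expires on the enqueueing CPU; add_timer_global() clears it so
  	 * that an active CPU may pull and expire the timer remotely while
  	 * this CPU is idle.
  	 */
  	add_timer_global(&demo_timer);
  }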
Thanks,

	Anna-Maria


Anna-Maria Behnsen (18):
  timers: Restructure get_next_timer_interrupt()
  timers: Split out get next timer interrupt
  timers: Move marking timer bases idle into tick_nohz_stop_tick()
  timers: Optimization for timer_base_try_to_set_idle()
  timers: Introduce add_timer() variants which modify timer flags
  workqueue: Use global variant for add_timer()
  timers: add_timer_on(): Make sure TIMER_PINNED flag is set
  timers: Ease code in run_local_timers()
  timers: Split next timer interrupt logic
  timers: Keep the pinned timers separate from the others
  timers: Retrieve next expiry of pinned/non-pinned timers separately
  timers: Split out "get next timer interrupt" functionality
  timers: Add get next timer interrupt functionality for remote CPUs
  timers: Check if timers base is handled already
  timers: Introduce function to check timer base is_idle flag
  timers: Implement the hierarchical pull model
  timer_migration: Add tracepoints
  timers: Always queue timers on the local CPU

Richard Cochran (linutronix GmbH) (2):
  timers: Restructure internal locking
  tick/sched: Split out jiffies update helper function

 MAINTAINERS                            |    1 +
 include/linux/cpuhotplug.h             |    1 +
 include/linux/timer.h                  |   16 +-
 include/trace/events/timer_migration.h |  297 +++++
 kernel/time/Makefile                   |    3 +
 kernel/time/tick-internal.h            |   14 +
 kernel/time/tick-sched.c               |   65 +-
 kernel/time/timer.c                    |  505 +++++--
 kernel/time/timer_migration.c          | 1693 ++++++++++++++++++++++++
 kernel/time/timer_migration.h          |  147 ++
 kernel/workqueue.c                     |    2 +-
 11 files changed, 2629 insertions(+), 115 deletions(-)
 create mode 100644 include/trace/events/timer_migration.h
 create mode 100644 kernel/time/timer_migration.c
 create mode 100644 kernel/time/timer_migration.h
Comments
On Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen wrote:
> +/**
> + * tmigr_quick_check() - Quick forecast of next tmigr event when CPU wants to
> + *			 go idle
> + *
> + * Returns KTIME_MAX, when it is probable that nothing has to be done (not the
> + * only one in the level 0 group; and if it is the only one in level 0 group,
> + * but there are more than a single group active in top level)
> + *
> + * Returns first expiry of the top level group, when it is the only one in level
> + * 0 and top level also only has a single active child.
> + */
> +u64 tmigr_quick_check(void)
> +{
> +	struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu);
> +	struct tmigr_group *topgroup;
> +	struct list_head lvllist;
> +
> +	if (tmigr_is_not_available(tmc))
> +		return KTIME_MAX;

Offline CPUs are supposed to handle their own global timers. So instead of
returning KTIME_MAX here, shouldn't we pass instead tevt->global as a
parameter and return that value? Otherwise the quick check will simply
ignore the next global event of this CPU if it's before the next local
event.

> +
> +	if (WARN_ON_ONCE(tmc->idle))
> +		return KTIME_MAX;

Same here I guess...

> +
> +	if (!tmigr_check_migrator_and_lonely(tmc->tmgroup, tmc->childmask))
> +		return KTIME_MAX;

This one makes sense.

> +
> +	for (int i = tmigr_hierarchy_levels; i > 0 ; i--) {
> +		lvllist = tmigr_level_list[i - 1];
> +		if (list_is_singular(&lvllist)) {
> +			topgroup = list_first_entry(&lvllist, struct tmigr_group, list);

Is it safe against concurrent allocation failure in hotplug? If the list
is seen singular, then concurrently a CPU comes up and creates/adds a new
group. The current CPU actually fetches it instead of the current group
because it's not singular anymore. But then some higher level group
allocation fails and the newly added first entry is removed.

list_is_singular() looks safe. But list_first_entry isn't. You can create
list_first_entry_rcu:

#define list_first_entry_rcu(ptr, type, member) \
	list_entry_rcu((ptr)->next, type, member)

Protected inside rcu_read_lock() until the below READ_ONCE(). And then use
list_del_rcu/list_add_rcu/kfree_rcu on the update side.

Isn't it possible to walk through group->parent instead?

> +
> +			if (tmigr_check_lonely(topgroup))
> +				return READ_ONCE(topgroup->next_expiry);
> +		} else {
> +			continue;
> +		}
> +	}
> +
> +	return KTIME_MAX;

I'm less sure about that return value.

Thanks.
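A minimal sketch of the RCU-protected lookup Frederic suggests above. The
list_first_entry_rcu() macro is the one he proposes creating (it is not an
existing kernel helper); the other names are taken from the quoted patch,
and i/tmigr_level_list are assumed to come from the surrounding loop:

  #define list_first_entry_rcu(ptr, type, member) \
  	list_entry_rcu((ptr)->next, type, member)

  	/* Inside the existing per-level loop of tmigr_quick_check(): */
  	rcu_read_lock();
  	if (list_is_singular(&tmigr_level_list[i - 1])) {
  		struct tmigr_group *topgroup =
  			list_first_entry_rcu(&tmigr_level_list[i - 1],
  					     struct tmigr_group, list);

  		if (tmigr_check_lonely(topgroup)) {
  			u64 expiry = READ_ONCE(topgroup->next_expiry);

  			rcu_read_unlock();
  			return expiry;
  		}
  	}
  	rcu_read_unlock();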
On Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen wrote:
> + * Protection of the tmigr group state information:
> + * ------------------------------------------------
> + *
> + * The state information with the list of active children and migrator needs to
> + * be protected by a sequence counter. It prevents a race when updates in child
> + * groups are propagated in changed order. The state update is performed
> + * lockless and group wise. The following scenario describes what happens
> + * without updating the sequence counter:
> + *
> + * Therefore, let's take three groups and four CPUs (CPU2 and CPU3 as well
> + * as GRP0:1 will not change during the scenario):
> + *
> + *    LVL 1            [GRP1:0]
> + *                     migrator = GRP0:1
> + *                     active   = GRP0:0, GRP0:1
> + *                    /                    \
> + *    LVL 0     [GRP0:0]                 [GRP0:1]
> + *              migrator = CPU0          migrator = CPU2
> + *              active   = CPU0          active   = CPU2
> + *              /      \                 /      \
> + *    CPUs     0        1               2        3
> + *           active    idle           active    idle
> + *
> + *
> + * 1. CPU0 goes idle. As the update is performed group wise, in the first step
> + *    only GRP0:0 is updated. The update of GRP1:0 is pending as CPU0 has to
> + *    walk the hierarchy.
> + *
> + *    LVL 1            [GRP1:0]
> + *                     migrator = GRP0:1
> + *                     active   = GRP0:0, GRP0:1
> + *                    /                    \
> + *    LVL 0     [GRP0:0]                 [GRP0:1]
> + *          --> migrator = TMIGR_NONE    migrator = CPU2
> + *          --> active   =               active   = CPU2
> + *              /      \                 /      \
> + *    CPUs     0        1               2        3
> + *         --> idle    idle           active    idle
> + *
> + * 2. While CPU0 goes idle and continues to update the state, CPU1 comes out of
> + *    idle. CPU1 updates GRP0:0. The update for GRP1:0 is pending as CPU1 also
> + *    has to walk the hierarchy. Both CPUs (CPU0 and CPU1) now walk the
> + *    hierarchy to perform the needed update from their point of view. The
> + *    currently visible state looks the following:
> + *
> + *    LVL 1            [GRP1:0]
> + *                     migrator = GRP0:1
> + *                     active   = GRP0:0, GRP0:1
> + *                    /                    \
> + *    LVL 0     [GRP0:0]                 [GRP0:1]
> + *          --> migrator = CPU1          migrator = CPU2
> + *          --> active   = CPU1          active   = CPU2
> + *              /      \                 /      \
> + *    CPUs     0        1               2        3
> + *           idle  --> active         active    idle
> + *
> + * 3. Here is the race condition: CPU1 managed to propagate its changes (from
> + *    step 2) through the hierarchy to GRP1:0 before CPU0 (step 1) did. The
> + *    active members of GRP1:0 remain unchanged after the update since it is
> + *    still valid from CPU1 current point of view:
> + *
> + *    LVL 1            [GRP1:0]
> + *                 --> migrator = GRP0:1
> + *                 --> active   = GRP0:0, GRP0:1
> + *                    /                    \
> + *    LVL 0     [GRP0:0]                 [GRP0:1]
> + *              migrator = CPU1          migrator = CPU2
> + *              active   = CPU1          active   = CPU2
> + *              /      \                 /      \
> + *    CPUs     0        1               2        3
> + *           idle      active         active    idle

So let's take this scenario and suppose we are at this stage. CPU 1 has
propagated the state up to [GRP1:0] and CPU 0 is going to do it but hasn't
yet.

> +static bool tmigr_inactive_up(struct tmigr_group *group,
> +			      struct tmigr_group *child,
> +			      void *ptr)
> +{
> +	union tmigr_state curstate, newstate, childstate;
> +	struct tmigr_walk *data = ptr;
> +	bool walk_done;
> +	u8 childmask;
> +
> +	childmask = data->childmask;
> +	curstate.state = atomic_read(&group->migr_state);

And now suppose CPU 0 arrives here and sees the group->migr_state change
performed by CPU 1. So it's all good, right? The below atomic_cmpxchg()
will succeed on the first take.

> +	childstate.state = 0;
> +
> +	do {
> +		if (child)
> +			childstate.state = atomic_read(&child->migr_state);

But then how do you guarantee that CPU 0 will load here the version of
child->migr_state modified by CPU 1? What prevents it from loading the
stale value? The one that was modified by CPU 0 instead?

Nothing, because the two above reads are unordered. As a result, CPU 0 may
ignore the fact that CPU 1 is up and wrongly report GRP0:0 as active up to
GRP1:0.

One way to solve this is to change the above atomic_read(&group->migr_state)
into atomic_read_acquire(&group->migr_state). It's cheap and pairs with the
order enforced by the upwards successful cmpxchg calls.

> +
> +		newstate = curstate;
> +		walk_done = true;
> +
> +		/* Reset active bit when the child is no longer active */
> +		if (!childstate.active)
> +			newstate.active &= ~childmask;
> +
> +		if (newstate.migrator == childmask) {
> +			/*
> +			 * Find a new migrator for the group, because the child
> +			 * group is idle!
> +			 */
> +			if (!childstate.active) {
> +				unsigned long new_migr_bit, active = newstate.active;
> +
> +				new_migr_bit = find_first_bit(&active, BIT_CNT);
> +
> +				if (new_migr_bit != BIT_CNT) {
> +					newstate.migrator = BIT(new_migr_bit);
> +				} else {
> +					newstate.migrator = TMIGR_NONE;
> +
> +					/* Changes need to be propagated */
> +					walk_done = false;
> +				}
> +			}
> +		}
> +
> +		newstate.seq++;
> +
> +		WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active));
> +
> +	} while (!atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state));

Similarly, I seem to remember that a failing cmpxchg() doesn't imply a full
memory barrier. If it's the case, you may need to reload &group->migr_state
using an acquire barrier. But lemme check that...

Thanks.
Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit :
> +static bool tmigr_inactive_up(struct tmigr_group *group,
> +			      struct tmigr_group *child,
> +			      void *ptr)
> +{
> +	union tmigr_state curstate, newstate, childstate;
> +	struct tmigr_walk *data = ptr;
> +	bool walk_done;
> +	u8 childmask;
> +
> +	childmask = data->childmask;
> +	curstate.state = atomic_read(&group->migr_state);
> +	childstate.state = 0;
> +
> +	do {

So I got the confirmation from Boqun (+Cc) and Paul that a failing cmpxchg
may not order the load of the old value against subsequent loads. And that
may apply to atomic_try_cmpxchg() as well.

Therefore you not only need to turn the group->migr_state read into an
atomic_read_acquire() but you also need to do this on each iteration of
this loop. For example you can move the read_acquire right here.

Thanks.

> +		if (child)
> +			childstate.state = atomic_read(&child->migr_state);
> +
> +		newstate = curstate;
> +		walk_done = true;
> +
> +		/* Reset active bit when the child is no longer active */
> +		if (!childstate.active)
> +			newstate.active &= ~childmask;
> +
> +		if (newstate.migrator == childmask) {
> +			/*
> +			 * Find a new migrator for the group, because the child
> +			 * group is idle!
> +			 */
> +			if (!childstate.active) {
> +				unsigned long new_migr_bit, active = newstate.active;
> +
> +				new_migr_bit = find_first_bit(&active, BIT_CNT);
> +
> +				if (new_migr_bit != BIT_CNT) {
> +					newstate.migrator = BIT(new_migr_bit);
> +				} else {
> +					newstate.migrator = TMIGR_NONE;
> +
> +					/* Changes need to be propagated */
> +					walk_done = false;
> +				}
> +			}
> +		}
> +
> +		newstate.seq++;
> +
> +		WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active));
> +
> +	} while (!atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state));
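A compressed sketch of what "move the read_acquire right here" amounts to,
using the names from the quoted function; the elided middle is the
unchanged newstate computation, and a variant that avoids the full re-read
is discussed later in this thread:

  	do {
  		/*
  		 * (Re)load the group state with ACQUIRE on every iteration:
  		 * a failed atomic_try_cmpxchg() does not provide ordering,
  		 * so without this the child->migr_state read below could
  		 * observe stale data.
  		 */
  		curstate.state = atomic_read_acquire(&group->migr_state);

  		if (child)
  			childstate.state = atomic_read(&child->migr_state);

  		/* ... compute newstate from curstate/childstate as above ... */

  	} while (!atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state));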
Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > +int tmigr_requires_handle_remote(void) > +{ > + struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); > + struct tmigr_remote_data data; > + unsigned int ret = 0; > + unsigned long jif; > + > + if (tmigr_is_not_available(tmc)) > + return ret; > + > + data.now = get_jiffies_update(&jif); > + data.childmask = tmc->childmask; > + data.firstexp = KTIME_MAX; > + data.tmc_active = !tmc->idle; > + data.check = false; > + > + /* > + * If the CPU is active, walk the hierarchy to check whether a remote > + * expiry is required. > + * > + * Check is done lockless as interrupts are disabled and @tmc->idle is > + * set only by the local CPU. > + */ > + if (!tmc->idle) { > + __walk_groups(&tmigr_requires_handle_remote_up, &data, tmc); > + > + if (data.firstexp != KTIME_MAX) > + ret = 1; > + > + return ret; > + } > + > + /* > + * If the CPU is idle, check whether the recalculation of @tmc->wakeup > + * is required. @tmc->wakeup_recalc is set, when the last active CPU > + * went offline. The last active CPU delegated the handling of the timer > + * migration hierarchy to another (this) CPU by updating this flag and > + * sending a reschedule. > + * > + * Racy lockless check is valid: > + * - @tmc->wakeup_recalc is set by the remote CPU before it issues > + * reschedule IPI. > + * - As interrupts are disabled here this CPU will either observe > + * @tmc->wakeup_recalc set before the reschedule IPI can be handled or > + * it will observe it when this function is called again on return > + * from handling the reschedule IPI. > + */ > + if (tmc->wakeup_recalc) { > + __walk_groups(&tmigr_requires_handle_remote_up, &data, tmc); > + > + if (data.firstexp != KTIME_MAX) > + ret = 1; > + > + raw_spin_lock(&tmc->lock); > + WRITE_ONCE(tmc->wakeup, data.firstexp); > + tmc->wakeup_recalc = false; > + raw_spin_unlock(&tmc->lock); Suppose we have: [GRP1:0] migrator = GRP0:1 active = GRP0:0, GRP0:1 / \ [GRP0:0] [GRP0:1] migrator = CPU 1 migrator = CPU 3 active = CPU 1 active = CPU 3 / \ / \ CPUs 0 1 2 3 idle active idle active CPU 0 and CPU 2 have no timer. CPU 1 has a timer in a few millisecs. [GRP1:0] migrator = GRP0:1 active = GRP0:1 / \ [GRP0:0] [GRP0:1] migrator = NONE migrator = CPU 3 active = NONE active = CPU 3 / \ / \ CPUs 0 1 2 3 idle idle idle active CPU 1 went idle, CPU 3 will take care of CPU 1's timer. Then come two things happening at the same time: CPU 0 has a timer interrupt, due to RCU callbacks handling for example, and CPU 3 goes offline: CPU 0 CPU 3 ----- ----- // On top level [GRP1:0], just set migrator = TMIGR_NONE tmigr_inactive_up() { cpu = cpumask_any_but(cpu_online_mask, cpu); //cpu == 0 tmc_resched = per_cpu_ptr(&tmigr_cpu, CPU 0); raw_spin_lock(&tmc_resched->lock); tmc_resched->wakeup_recalc = true; raw_spin_unlock(&tmc_resched->lock); // timer interrupt run_local_timers() { tmigr_requires_handle_remote() { data.firstexp = KTIME_MAX; // CPU 0 sees the tmc_resched->wakeup_recalc // latest update if (tmc->wakeup_recalc) { tmigr_requires_handle_remote_up() { // CPU 0 doesn't see GRP0:0 // latest update from CPU 1, // because it has no locking // and does a racy check. 
if (!tmigr_check_migrator(group, childmask)) return true; } raw_spin_lock(&tmc->lock); WRITE_ONCE(tmc->wakeup, data.firstexp); tmc->wakeup_recalc = false; raw_spin_unlock(&tmc->lock) return 0; } // IPI is sent only now smp_send_reschedule(cpu); } There is nothing that prevents CPU 0 from not seeing the hierarchy updates from other CPUs since it checks the migrators in a racy way. As a result the timer of CPU 1 may be ignored by CPU 0. You'd need to lock the tmc while calling tmigr_requires_handle_remote_up(), so that CPU 0 "inherits" everything that CPU 3 has seen, and that includes changes from CPU 1. But I see that tmigr_cpu_new_timer() does it right. Wouldn't it be possible to exlusively let tmigr_cpu_new_timer() handle the wakeup_recalc thing? This is going to be called after the end of the IRQ (whether timer interrupt or sched IPI) in any case. Thanks.
Frederic Weisbecker <frederic@kernel.org> writes: > Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : >> +static bool tmigr_inactive_up(struct tmigr_group *group, >> + struct tmigr_group *child, >> + void *ptr) >> +{ >> + union tmigr_state curstate, newstate, childstate; >> + struct tmigr_walk *data = ptr; >> + bool walk_done; >> + u8 childmask; >> + >> + childmask = data->childmask; >> + curstate.state = atomic_read(&group->migr_state); >> + childstate.state = 0; >> + >> + do { > > So I got the confirmation from Boqun (+Cc) and Paul that a failing cmpxchg > may not order the load of the old value against subsequent loads. And > that may apply to atomic_try_cmpxchg() as well. > > Therefore you not only need to turn group->migr_state read into > an atomic_read_acquire() but you also need to do this on each iteration > of this loop. For example you can move the read_acquire right here. I tried to read and understand more about the memory barriers especially the acquire/release stuff. So please correct me whenever I'm wrong. We have to make sure that the child/group state values contain the last updates and prevent reordering to be able to rely on those values. So I understand, that we need the atomic_read_acquire() here for the child state, because we change the group state accordingly and need to make sure, that it contains the last update of it. The cmpxchg which writes the child state is (on success) a full memory barrier. And the atomic_read_acquire() makes sure all preceding "critical sections" (which ends with the full memory barrier) are visible. Is this right? To make sure the proper states are used, atomic_read_acquire() is then also required in: - tmigr_check_migrator() - tmigr_check_migrator_and_lonely() - tmigr_check_lonely() - tmigr_new_timer_up() (for childstate and groupstate) - tmigr_connect_child_parent() Right? Regarding the pairing of acquire: What happens when two atomic_read_acquire() are executed afterwards without pairing 1:1 with a release or stronger memory barrier? Now I want to understand the case for the group state here and also in active_up path. When reading it without acquire, it is possible, that not all changes are visible due to reordering,... . But then the worst outcome would be that the cmpxchg fails and the loop has to be done once more? Is this right? I know that memory barriers are not for free and redo the loop is also not for free. But I don't know which of both is worse. At least in inactive_up() path, we are not in the critical path. In active_up() it would be good to take the less expensive option. I want to understand the atomic_try_cmpxchg_acquire() variant: The Read is an acquire, so even if the compare/write fails, the value which is handed back is the one which was update last with a succesful cmpxchg and then we can rely on this value? Thanks a lot in advance for the help to understand this topic a little better! Anna-Maria > > Thanks. > >> + if (child) >> + childstate.state = atomic_read(&child->migr_state); >> + >> + newstate = curstate; >> + walk_done = true; >> + >> + /* Reset active bit when the child is no longer active */ >> + if (!childstate.active) >> + newstate.active &= ~childmask; >> + >> + if (newstate.migrator == childmask) { >> + /* >> + * Find a new migrator for the group, because the child >> + * group is idle! 
>> + */ >> + if (!childstate.active) { >> + unsigned long new_migr_bit, active = newstate.active; >> + >> + new_migr_bit = find_first_bit(&active, BIT_CNT); >> + >> + if (new_migr_bit != BIT_CNT) { >> + newstate.migrator = BIT(new_migr_bit); >> + } else { >> + newstate.migrator = TMIGR_NONE; >> + >> + /* Changes need to be propagated */ >> + walk_done = false; >> + } >> + } >> + } >> + >> + newstate.seq++; >> + >> + WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active)); >> + >> + } while (!atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state));
On Sat, Jan 27, 2024 at 11:54:46PM +0100, Frederic Weisbecker wrote:
> Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit :
> > +static bool tmigr_inactive_up(struct tmigr_group *group,
> > +			      struct tmigr_group *child,
> > +			      void *ptr)
> > +{
> > +	union tmigr_state curstate, newstate, childstate;
> > +	struct tmigr_walk *data = ptr;
> > +	bool walk_done;
> > +	u8 childmask;
> > +
> > +	childmask = data->childmask;
> > +	curstate.state = atomic_read(&group->migr_state);
> > +	childstate.state = 0;
> > +
> > +	do {
>
> So I got the confirmation from Boqun (+Cc) and Paul that a failing cmpxchg
> may not order the load of the old value against subsequent loads. And
> that may apply to atomic_try_cmpxchg() as well.

Plus we checked with Mark Rutland, who agreed and went further, kindly
submitting a patch to clarify the documentation:

https://lore.kernel.org/lkml/20240129122250.1086874-1-mark.rutland@arm.com/

Here is an LKMM litmus test demonstrating that failing cmpxchg() does not
provide ordering:

------------------------------------------------------------------------

C cmpxchg-fail-order

{}

P0(int *x, int *y, int *z)
{
	int r0;
	int r1;

	WRITE_ONCE(*x, 1);
	r1 = cmpxchg(z, 1, 0);
	r0 = READ_ONCE(*y);
}

P1(int *x, int *y, int *z)
{
	int r0;
	int r1;

	WRITE_ONCE(*y, 1);
	r1 = cmpxchg(z, 1, 0);
	r0 = READ_ONCE(*x);
}

locations [0:r1;1:r1]
exists (0:r0=0 /\ 1:r0=0)

------------------------------------------------------------------------

Yes, this is cmpxchg() rather than atomic_cmpxchg(), but both have the
same ordering properties.

Here P0() is one thread and P1() the other. Their parameters are the
global shared variables. The "{}" preceding P0() is where initialization
could be placed, but this test is fine with the default initialization of
zero, for example, given that both instances of cmpxchg() specify the
value 1 as the old value (and thus both instances will fail).

The "locations" clause says to print the final values of P0()'s and P1()'s
local variables r1, where the leading "0:" selects P0() and the leading
"1:" selects P1(). And yes, the first process must be named "P0" and
subsequent processes' names must consist of "P" followed by a consecutive
number, so P0(), P1(), P2(), P3(), ...

The "exists" clause gives an expression to be tested "at the end of time".
Again, the "0:" and "1:" select P0() and P1(), respectively. The "=" tests
for equality. The "/\" is boolean AND. Boolean OR would be "\/" and
boolean NOT "~". Parentheses may be used. A variable name on the left-hand
side of a relational operator denotes the value of that variable, but on
the right-hand side its address.

The value of any variable mentioned in the "exists" clause is printed as
part of the "States" list shown below.
Running this with the herd7 tool does the moral equivalent of a full
state-space search, and gets the following:

------------------------------------------------------------------------

$ herd7 -conf linux-kernel.cfg /tmp/cmpxchg.litmus
Test cmpxchg-fail-order Allowed
States 4
0:r0=0; 0:r1=0; 1:r0=0; 1:r1=0;
0:r0=0; 0:r1=0; 1:r0=1; 1:r1=0;
0:r0=1; 0:r1=0; 1:r0=0; 1:r1=0;
0:r0=1; 0:r1=0; 1:r0=1; 1:r1=0;
Ok
Witnesses
Positive: 1 Negative: 3
Condition exists (0:r0=0 /\ 1:r0=0)
Observation cmpxchg-fail-order Sometimes 1 3
Time cmpxchg-fail-order 0.00
Hash=564afea251867c6127350213c7eb388d

------------------------------------------------------------------------

Here, there are four possible combinations of final values for the two r0
local variables, and all of them can happen, including the combination
where both are zero, which is the combination called out by the "exists"
clause. The "Sometimes" says that the expression in the "exists" clause is
sometimes true (other options being "Always" and "Never"). The fact that
both instances of r0 can be zero shows that failing cmpxchg() really does
not provide ordering.

And again, ordering for atomic_try_cmpxchg() is the same as that for
cmpxchg().

							Thanx, Paul

> Therefore you not only need to turn group->migr_state read into
> an atomic_read_acquire() but you also need to do this on each iteration
> of this loop. For example you can move the read_acquire right here.
>
> Thanks.
>
> > +		if (child)
> > +			childstate.state = atomic_read(&child->migr_state);
> > +
> > +		newstate = curstate;
> > +		walk_done = true;
> > +
> > +		/* Reset active bit when the child is no longer active */
> > +		if (!childstate.active)
> > +			newstate.active &= ~childmask;
> > +
> > +		if (newstate.migrator == childmask) {
> > +			/*
> > +			 * Find a new migrator for the group, because the child
> > +			 * group is idle!
> > +			 */
> > +			if (!childstate.active) {
> > +				unsigned long new_migr_bit, active = newstate.active;
> > +
> > +				new_migr_bit = find_first_bit(&active, BIT_CNT);
> > +
> > +				if (new_migr_bit != BIT_CNT) {
> > +					newstate.migrator = BIT(new_migr_bit);
> > +				} else {
> > +					newstate.migrator = TMIGR_NONE;
> > +
> > +					/* Changes need to be propagated */
> > +					walk_done = false;
> > +				}
> > +			}
> > +		}
> > +
> > +		newstate.seq++;
> > +
> > +		WARN_ON_ONCE((newstate.migrator != TMIGR_NONE) && !(newstate.active));
> > +
> > +	} while (!atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state));
Le Mon, Jan 29, 2024 at 11:50:39AM +0100, Anna-Maria Behnsen a écrit : > Frederic Weisbecker <frederic@kernel.org> writes: > > > Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > >> +static bool tmigr_inactive_up(struct tmigr_group *group, > >> + struct tmigr_group *child, > >> + void *ptr) > >> +{ > >> + union tmigr_state curstate, newstate, childstate; > >> + struct tmigr_walk *data = ptr; > >> + bool walk_done; > >> + u8 childmask; > >> + > >> + childmask = data->childmask; > >> + curstate.state = atomic_read(&group->migr_state); > >> + childstate.state = 0; > >> + > >> + do { > > > > So I got the confirmation from Boqun (+Cc) and Paul that a failing cmpxchg > > may not order the load of the old value against subsequent loads. And > > that may apply to atomic_try_cmpxchg() as well. > > > > Therefore you not only need to turn group->migr_state read into > > an atomic_read_acquire() but you also need to do this on each iteration > > of this loop. For example you can move the read_acquire right here. > > I tried to read and understand more about the memory barriers especially > the acquire/release stuff. So please correct me whenever I'm wrong. > > We have to make sure that the child/group state values contain the last > updates and prevent reordering to be able to rely on those values. > > So I understand, that we need the atomic_read_acquire() here for the > child state, because we change the group state accordingly and need to > make sure, that it contains the last update of it. The cmpxchg which > writes the child state is (on success) a full memory barrier. And the > atomic_read_acquire() makes sure all preceding "critical sections" > (which ends with the full memory barrier) are visible. Is this right? Right. And BTW I'm being suggested by Paul to actually avoid atomic_read_acquire() after cmpxchg() failure because that implies an error prone re-read. So pick up your favourite between smp_rmb() or smp_mb__after_atomic(). With the latter this could look like: curstate.state = atomic_read_acquire(&group->migr_state); for (;;) { childstate.state = atomic_read(&child->migr_state); ... if (atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state)) break; smp_mb__after_atomic(); } > > To make sure the proper states are used, atomic_read_acquire() is then > also required in: > - tmigr_check_migrator() > - tmigr_check_migrator_and_lonely() > - tmigr_check_lonely() Not sure about those. I'll check them. > - tmigr_new_timer_up() (for childstate and groupstate) Actually you need to fix some ordering there that I suggested a while ago :) See https://lore.kernel.org/all/ZIhKT3h7Dc0G3xoU@lothringen/ > - tmigr_connect_child_parent() > Right? > > Regarding the pairing of acquire: What happens when two > atomic_read_acquire() are executed afterwards without pairing 1:1 with a > release or stronger memory barrier? I think I'll need an example. > > Now I want to understand the case for the group state here and also in > active_up path. When reading it without acquire, it is possible, that > not all changes are visible due to reordering,... . But then the worst > outcome would be that the cmpxchg fails and the loop has to be done once > more? Is this right? Right. This one looks good as it doesn't depend on the child's value. > > I know that memory barriers are not for free and redo the loop is also > not for free. But I don't know which of both is worse. At least in > inactive_up() path, we are not in the critical path. 
In active_up() it > would be good to take the less expensive option. I don't think you need to change the active_up(), from a quick glance. > > I want to understand the atomic_try_cmpxchg_acquire() variant: The Read > is an acquire, so even if the compare/write fails, the value which is > handed back is the one which was update last with a succesful cmpxchg > and then we can rely on this value? So cmpxchg_acquire() provides a weaker ordering than cmpxchg(). Instead of issuing a full memory barrier, it issues an acquire barrier, which is really not what you want since you actually want to order what precedes the cmpxchg() with the write that it performs. At the very least you would actually need cmpxchg_release(). And most importantly, neither cmpxchg(), cmpxchg_release() nor cmpxchg_acquire() guarantee any ordering on failure. Thanks.
Frederic Weisbecker <frederic@kernel.org> writes: > Le Mon, Jan 29, 2024 at 11:50:39AM +0100, Anna-Maria Behnsen a écrit : >> Frederic Weisbecker <frederic@kernel.org> writes: >> >> > Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : >> >> +static bool tmigr_inactive_up(struct tmigr_group *group, >> >> + struct tmigr_group *child, >> >> + void *ptr) >> >> +{ >> >> + union tmigr_state curstate, newstate, childstate; >> >> + struct tmigr_walk *data = ptr; >> >> + bool walk_done; >> >> + u8 childmask; >> >> + >> >> + childmask = data->childmask; >> >> + curstate.state = atomic_read(&group->migr_state); >> >> + childstate.state = 0; >> >> + >> >> + do { >> > >> > So I got the confirmation from Boqun (+Cc) and Paul that a failing cmpxchg >> > may not order the load of the old value against subsequent loads. And >> > that may apply to atomic_try_cmpxchg() as well. >> > >> > Therefore you not only need to turn group->migr_state read into >> > an atomic_read_acquire() but you also need to do this on each iteration >> > of this loop. For example you can move the read_acquire right here. >> >> I tried to read and understand more about the memory barriers especially >> the acquire/release stuff. So please correct me whenever I'm wrong. >> >> We have to make sure that the child/group state values contain the last >> updates and prevent reordering to be able to rely on those values. >> >> So I understand, that we need the atomic_read_acquire() here for the >> child state, because we change the group state accordingly and need to >> make sure, that it contains the last update of it. The cmpxchg which >> writes the child state is (on success) a full memory barrier. And the >> atomic_read_acquire() makes sure all preceding "critical sections" >> (which ends with the full memory barrier) are visible. Is this right? > > Right. And BTW I'm being suggested by Paul to actually avoid > atomic_read_acquire() after cmpxchg() failure because that implies an > error prone re-read. So pick up your favourite between smp_rmb() or > smp_mb__after_atomic(). > > With the latter this could look like: > > curstate.state = atomic_read_acquire(&group->migr_state); > for (;;) { > childstate.state = atomic_read(&child->migr_state); > ... > if (atomic_try_cmpxchg(&group->migr_state, &curstate.state, newstate.state)) > break; > smp_mb__after_atomic(); > } I'll take this. >> >> To make sure the proper states are used, atomic_read_acquire() is then >> also required in: >> - tmigr_check_migrator() >> - tmigr_check_migrator_and_lonely() >> - tmigr_check_lonely() > > Not sure about those. I'll check them. Please ignore - I was on the wrong track and John helped me out (at least another step) of my "memory barrier understanding mess". So I have no more questions regarding the memory barriers here - at least at the moment. >> - tmigr_new_timer_up() (for childstate and groupstate) > > Actually you need to fix some ordering there that I suggested a while ago :) > See https://lore.kernel.org/all/ZIhKT3h7Dc0G3xoU@lothringen/ I quickly opened it and it seems that I completely missed it somehow. I'm sorry! Thanks for your patience! Anna-Maria
Le Sun, Jan 28, 2024 at 04:58:55PM +0100, Anna-Maria Behnsen a écrit : > Frederic Weisbecker <frederic@kernel.org> writes: > >> + > >> + if (tmigr_check_lonely(topgroup)) > >> + return READ_ONCE(topgroup->next_expiry); > > When I hand in tevt->global as a parameter, I'll need to compare the > first expiry of the toplevel group and the tevt->global value and return > the earlier expiry. Only a single child is active in top level, so it > might be that this CPU is the last active CPU in hierarchy. > > I didn't check all the way to the top whether all groups are > 'lonely'. So when the top level group has only a single active child, it > is also possible that the child of the top level group has two active > children... Then a return of KTIME_MAX would be also a more precise > forecast. > > This quick check is there to keep the overhead minimal when checking > whether it might be possible to go idle. So I don't know, if we should > add this additional check per level (which is pretty simple when using > group->parent for walking the hierarchy). What do you think? Not sure. Maybe if the tree never exceeds 3 levels (does it?) it's ok to do the walk? Thanks.
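A minimal sketch of the per-level parent walk being discussed, assuming
the tmigr_group fields quoted elsewhere in this thread (->parent,
->next_expiry) and the tmigr_check_lonely()/tmigr_check_migrator_and_lonely()
helpers. This only illustrates the idea and is not code from the series:

  	u64 expiry_walk_parents(struct tmigr_cpu *tmc)
  	{
  		struct tmigr_group *group = tmc->tmgroup;

  		if (!tmigr_check_migrator_and_lonely(group, tmc->childmask))
  			return KTIME_MAX;

  		/* Walk upwards; bail out as soon as one level has company. */
  		while (group->parent) {
  			group = group->parent;
  			if (!tmigr_check_lonely(group))
  				return KTIME_MAX;
  		}

  		return READ_ONCE(group->next_expiry);
  	}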
Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > +/* > + * Returns true, if there is nothing to be propagated to the next level > + * > + * @data->firstexp is set to expiry of first gobal event of the (top level of > + * the) hierarchy, but only when hierarchy is completely idle. > + * > + * This is the only place where the group event expiry value is set. > + */ > +static > +bool tmigr_update_events(struct tmigr_group *group, struct tmigr_group *child, > + struct tmigr_walk *data, union tmigr_state childstate, > + union tmigr_state groupstate) > +{ > + struct tmigr_event *evt, *first_childevt; > + bool walk_done, remote = data->remote; > + bool leftmost_change = false; > + u64 nextexp; > + > + if (child) { > + raw_spin_lock(&child->lock); > + raw_spin_lock_nested(&group->lock, SINGLE_DEPTH_NESTING); > + > + if (childstate.active) { Since you're going to do the atomic_read(&group->migr_state) within the group->lock, you may as well do the atomic_read(&child->migr_state) within the child->lock. It won't hurt and simplifies the picture in the mind. Then you can add the following comment to outline the ordering expectations: /* * Observing child->migr_state.active means that: * * 1) Either the child is effectively active, then it's fine to stop here * * 2) Or we are racing with a CPU going inactive and this childstate is actually * not active anymore but tmigr_inactive_up() hasn't yet called tmigr_update_event() * on it. It's fine to stop here because that pending call will take care * of the rest of the propagation. * * 3) In any case it's impossible to observe childstate.active when a racing * CPU made it inactive and also called tmigr_update_event() on it. The * group->lock enforces ordering such that ->migr_state changes * in tmigr_inactive_up() are released by group->lock UNLOCK on the * subsequent call to tmigr_update_event() and then acquired by * child->lock LOCK in tmigr_new_timer() -> tmigr_update_event(). */ > + walk_done = true; > + goto unlock; > + } > + > + first_childevt = tmigr_next_groupevt(child); > + nextexp = child->next_expiry; > + evt = &child->groupevt; > + } else { > + nextexp = data->nextexp; > + > + first_childevt = evt = data->evt; > + > + /* > + * Walking the hierarchy is required in any case when a > + * remote expiry was done before. This ensures to not lose > + * already queued events in non active groups (see section > + * "Required event and timerqueue update after a remote > + * expiry" in the documentation at the top). > + * > + * The two call sites which are executed without a remote expiry > + * before, are not prevented from propagating changes through > + * the hierarchy by the return: > + * - When entering this path by tmigr_new_timer(), @evt->ignore > + * is never set. > + * - tmigr_inactive_up() takes care of the propagation by > + * itself and ignores the return value. But an immediate > + * return is required because nothing has to be done in this > + * level as the event could be ignored. > + */ > + if (evt->ignore && !remote) > + return true; > + > + raw_spin_lock(&group->lock); > + } > + > + if (nextexp == KTIME_MAX) { > + evt->ignore = true; > + > + /* > + * When the next child event could be ignored (nextexp is > + * KTIME_MAX) and there was no remote timer handling before or > + * the group is already active, there is no need to walk the > + * hierarchy even if there is a parent group. 
> + * > + * The other way round: even if the event could be ignored, but > + * if a remote timer handling was executed before and the group > + * is not active, walking the hierarchy is required to not miss > + * an enqueued timer in the non active group. The enqueued timer > + * of the group needs to be propagated to a higher level to > + * ensure it is handled. > + */ > + if (!remote || groupstate.active) { Same here, fetching group->migr_state.active from within the lock simplifies the mind mapping. Thanks. > + walk_done = true; > + goto unlock; > + } > + } else { > + /* > + * An update of @evt->cpu and @evt->ignore flag is required only > + * when @child is set (the child is equal or higher than lvl0), > + * but it doesn't matter if it is written once more to the per > + * CPU event; make the update unconditional. > + */ > + evt->cpu = first_childevt->cpu; > + evt->ignore = false; > + } > + > + walk_done = !group->parent; > + > + /* > + * If the child event is already queued in the group, remove it from the > + * queue when the expiry time changed only. > + */ > + if (timerqueue_node_queued(&evt->nextevt)) { > + if (evt->nextevt.expires == nextexp) > + goto check_toplvl; > + > + leftmost_change = timerqueue_getnext(&group->events) == &evt->nextevt; > + if (!timerqueue_del(&group->events, &evt->nextevt)) > + WRITE_ONCE(group->next_expiry, KTIME_MAX); > + } > + > + evt->nextevt.expires = nextexp; > + > + if (timerqueue_add(&group->events, &evt->nextevt)) { > + leftmost_change = true; > + WRITE_ONCE(group->next_expiry, nextexp); > + } > + > +check_toplvl: > + if (walk_done && (groupstate.migrator == TMIGR_NONE)) { > + /* > + * Nothing to do when first event didn't changed and update was > + * done during remote timer handling. > + */ > + if (remote && !leftmost_change) > + goto unlock; > + /* > + * The top level group is idle and it has to be ensured the > + * global timers are handled in time. (This could be optimized > + * by keeping track of the last global scheduled event and only > + * arming it on the CPU if the new event is earlier. Not sure if > + * its worth the complexity.) > + */ > + data->firstexp = tmigr_next_groupevt_expires(group); > + } > + > +unlock: > + raw_spin_unlock(&group->lock); > + > + if (child) > + raw_spin_unlock(&child->lock); > + > + return walk_done; > +}
Le Tue, Jan 30, 2024 at 06:56:32PM +0100, Anna-Maria Behnsen a écrit : > Frederic Weisbecker <frederic@kernel.org> writes: > > CPU 1 went idle, CPU 3 will take care of CPU 1's timer. Then come two > > things happening at the same time: CPU 0 has a timer interrupt, due to > > RCU callbacks handling for example, and CPU 3 goes offline: > > > > CPU 0 CPU 3 > > ----- ----- > > // On top level [GRP1:0], just set migrator = TMIGR_NONE > > tmigr_inactive_up() { > > cpu = cpumask_any_but(cpu_online_mask, cpu); > > //cpu == 0 > > tmc_resched = per_cpu_ptr(&tmigr_cpu, CPU 0); > > raw_spin_lock(&tmc_resched->lock); > > tmc_resched->wakeup_recalc = true; > > raw_spin_unlock(&tmc_resched->lock); > > // timer interrupt > > run_local_timers() { > > tmigr_requires_handle_remote() { > > data.firstexp = KTIME_MAX; > > // CPU 0 sees the tmc_resched->wakeup_recalc > > // latest update > > if (tmc->wakeup_recalc) { > > tmigr_requires_handle_remote_up() { > > // CPU 0 doesn't see GRP0:0 > > // latest update from CPU 1, > > // because it has no locking > > // and does a racy check. > > if (!tmigr_check_migrator(group, childmask)) > > return true; > > } > > raw_spin_lock(&tmc->lock); > > WRITE_ONCE(tmc->wakeup, data.firstexp); > > tmc->wakeup_recalc = false; > > raw_spin_unlock(&tmc->lock) > > return 0; > > } > > // IPI is sent only now > > smp_send_reschedule(cpu); > > } > > > > > > There is nothing that prevents CPU 0 from not seeing the hierarchy updates from > > other CPUs since it checks the migrators in a racy way. As a result the timer of > > CPU 1 may be ignored by CPU 0. > > > > You'd need to lock the tmc while calling tmigr_requires_handle_remote_up(), so > > that CPU 0 "inherits" everything that CPU 3 has seen, and that includes changes > > from CPU 1. > > > > puhh. ok. But for the !idle case the lockless walk of > tmigr_requires_handle_remote_up() is ok? Looks ok to me. It's racy but if the !idle migrator doesn't notice in the current tick, it will in the next one. > It's also possible, that the > CPU misses an update of the state - another CPU goes idle and selects > this CPU as the new migrator. And this CPU reads a stale value where the > other CPU is migrator. But this will be revisited on the next > tick. hmm... Exactly, and I'm not worried. There has to be strong ordering with atomics or locking in the idle case because the CPU goes to sleep and it must make sure not to miss a timer. But in the !idle case the check is periodic, so you don't need any of that. We can live with an unnoticed timer for a tick or two. > > > > > But I see that tmigr_cpu_new_timer() does it right. Wouldn't it be possible to > > exlusively let tmigr_cpu_new_timer() handle the wakeup_recalc thing? This is > > going to be called after the end of the IRQ (whether timer interrupt or sched > > IPI) in any case. > > Should work, yes. But when a timer has to be handled right away and it > is checked after the end of the IRQ, then the tick might be reprogrammed > so that CPU comes out of idle, or am I wrong? If there is a pending timer, it can wait a tick. That's what happens if we wait for tmigr_cpu_new_timer() to handle it. But you know what, let's make it more simple. CPU down hotplug is not a fast path and it doesn't deserve so many optimizations. Just remove ->wakeup_recalc entirely and if the offlining CPU detects it's the last active CPU in the hierarchy, just queue an empty work to the first online CPU. It will briefly force that CPU out of idle and trigger an activate. 
Then either the CPU periodically checks remote timers or it will go back idle and notice. Something like this (untested): diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c index de1905b0bae7..0f15215ef257 100644 --- a/kernel/time/timer_migration.c +++ b/kernel/time/timer_migration.c @@ -548,7 +548,6 @@ static void __tmigr_cpu_activate(struct tmigr_cpu *tmc) tmc->cpuevt.ignore = true; WRITE_ONCE(tmc->wakeup, KTIME_MAX); - tmc->wakeup_recalc = false; walk_groups(&tmigr_active_up, &data, tmc); } @@ -1041,41 +1040,11 @@ int tmigr_requires_handle_remote(void) } /* - * If the CPU is idle, check whether the recalculation of @tmc->wakeup - * is required. @tmc->wakeup_recalc is set, when the last active CPU - * went offline. The last active CPU delegated the handling of the timer - * migration hierarchy to another (this) CPU by updating this flag and - * sending a reschedule. - * - * Racy lockless check is valid: - * - @tmc->wakeup_recalc is set by the remote CPU before it issues - * reschedule IPI. - * - As interrupts are disabled here this CPU will either observe - * @tmc->wakeup_recalc set before the reschedule IPI can be handled or - * it will observe it when this function is called again on return - * from handling the reschedule IPI. - */ - if (tmc->wakeup_recalc) { - __walk_groups(&tmigr_requires_handle_remote_up, &data, tmc); - - if (data.firstexp != KTIME_MAX) - ret = 1; - - raw_spin_lock(&tmc->lock); - WRITE_ONCE(tmc->wakeup, data.firstexp); - tmc->wakeup_recalc = false; - raw_spin_unlock(&tmc->lock); - - return ret; - } - - /* - * When the CPU is idle and @tmc->wakeup is reliable as - * @tmc->wakeup_recalc is not set, compare it with @data.now. The lock - * is required on 32bit architectures to read the variable consistently - * with a concurrent writer. On 64bit the lock is not required because - * the read operation is not split and so it is always consistent. - + * When the CPU is idle and @tmc->wakeup is reliable, compare it with + * @data.now. The lock is required on 32bit architectures to read the + * variable consistently with a concurrent writer. On 64bit the lock + * is not required because the read operation is not split and so it is + * always consistent. */ if (IS_ENABLED(CONFIG_64BIT)) { if (data.now >= READ_ONCE(tmc->wakeup)) @@ -1119,21 +1088,7 @@ u64 tmigr_cpu_new_timer(u64 nextexp) tmc->cpuevt.ignore) { ret = tmigr_new_timer(tmc, nextexp); } - } else if (tmc->wakeup_recalc) { - struct tmigr_remote_data data; - - data.now = KTIME_MAX; - data.childmask = tmc->childmask; - data.firstexp = KTIME_MAX; - data.tmc_active = false; - data.check = false; - - __walk_groups(&tmigr_requires_handle_remote_up, &data, tmc); - - ret = data.firstexp; } - tmc->wakeup_recalc = false; - /* * Make sure the reevaluation of timers in idle path will not miss an * event. @@ -1212,36 +1167,7 @@ static bool tmigr_inactive_up(struct tmigr_group *group, * hierarchy) and * - there is a pending event in the hierarchy */ - if (data->firstexp != KTIME_MAX) { - WARN_ON_ONCE(group->parent); - /* - * Top level path: If this CPU is about going offline and was - * the last active CPU, wake up some random other CPU so it will - * take over the migrator duty and program its timer - * properly. Ideally wake the CPU with the closest expiry time, - * but that's overkill to figure out. - * - * Set wakeup_recalc of remote CPU, to make sure the complete - * idle hierarchy with enqueued timers is reevaluated. 
- */ - if (!(this_cpu_ptr(&tmigr_cpu)->online)) { - struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); - unsigned int cpu = smp_processor_id(); - struct tmigr_cpu *tmc_resched; - - cpu = cpumask_any_but(cpu_online_mask, cpu); - tmc_resched = per_cpu_ptr(&tmigr_cpu, cpu); - - raw_spin_unlock(&tmc->lock); - - raw_spin_lock(&tmc_resched->lock); - tmc_resched->wakeup_recalc = true; - raw_spin_unlock(&tmc_resched->lock); - - raw_spin_lock(&tmc->lock); - smp_send_reschedule(cpu); - } - } + WARN_ON_ONCE(data->firstexp != KTIME_MAX && group->parent); return walk_done; } @@ -1579,9 +1505,20 @@ static int tmigr_cpu_online(unsigned int cpu) return 0; } +long tmigr_trigger_active(void *unused) +{ + struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); + + WARN_ON_ONCE(!tmc->online || tmc->idle); + + return 0; +} + static int tmigr_cpu_offline(unsigned int cpu) { struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); + int migrator; + u64 firstexp; raw_spin_lock_irq(&tmc->lock); tmc->online = false; @@ -1591,9 +1528,14 @@ static int tmigr_cpu_offline(unsigned int cpu) * CPU has to handle the local events on his own, when on the way to * offline; Therefore nextevt value is set to KTIME_MAX */ - __tmigr_cpu_deactivate(tmc, KTIME_MAX); + firstexp = __tmigr_cpu_deactivate(tmc, KTIME_MAX); raw_spin_unlock_irq(&tmc->lock); + if (firstexp != KTIME_MAX) { + migrator = cpumask_any_but(cpu_online_mask, cpu); + work_on_cpu(migrator, tmigr_trigger_active, NULL); + } + return 0; } diff --git a/kernel/time/timer_migration.h b/kernel/time/timer_migration.h index c32947cf429b..c556d5824792 100644 --- a/kernel/time/timer_migration.h +++ b/kernel/time/timer_migration.h @@ -78,18 +78,12 @@ struct tmigr_group { * @idle: Indicates whether the CPU is idle in the timer migration * hierarchy * @remote: Is set when timers of the CPU are expired remotely - * @wakeup_recalc: Indicates, whether a recalculation of the @wakeup value - * is required. @wakeup_recalc is only used by this CPU - * when it is marked idle in the timer migration - * hierarchy. It is set by a remote CPU which was the last - * active CPU and is on the way to idle. * @tmgroup: Pointer to the parent group * @childmask: childmask of tmigr_cpu in the parent group * @wakeup: Stores the first timer when the timer migration * hierarchy is completely idle and remote expiry was done; * is returned to timer code in the idle path and is only - * used in idle path; it is only valid, when @wakeup_recalc - * is not set. + * used in idle path. * @cpuevt: CPU event which could be enqueued into the parent group */ struct tmigr_cpu { @@ -97,7 +91,6 @@ struct tmigr_cpu { bool online; bool idle; bool remote; - bool wakeup_recalc; struct tmigr_group *tmgroup; u8 childmask; u64 wakeup;
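For context on the hand-off in the hunk above: work_on_cpu() runs the callback synchronously in a worker on the chosen CPU and sleeps until it has completed, which is acceptable here because the hotplug callbacks run in sleepable process context. The pattern the diff relies on, with the reasoning spelled out as comments (tmigr_trigger_active() is the empty callback defined in the hunk):

	if (firstexp != KTIME_MAX) {
		/* Any online CPU except the one going down will do. */
		int migrator = cpumask_any_but(cpu_online_mask, cpu);

		/*
		 * The worker waking up on @migrator briefly forces that CPU
		 * out of idle; on idle exit it activates itself in the
		 * hierarchy and picks up the pending global timers, so the
		 * callback itself has nothing to do.
		 */
		work_on_cpu(migrator, tmigr_trigger_active, NULL);
	}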
On 15/01/2024 14:37, Anna-Maria Behnsen wrote: > Hi, > > the cleanup patches are already applied and so the contains only two parts: > > - Patches 1 - 4: timer base idle marking rework with two preparatory > changes. See the section below for more details. > > - Patches 5 - 20: Updated timer pull model on top of timer idle rework > > > The queue is available here: > > git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel timers/pushpull > > > Move marking timer bases as idle into tick_nohz_stop_tick() > ----------------------------------------------------------- > > The idle marking of timer bases is done in get_next_timer_interrupt() > whenever possible. The timer bases are idle, even if the tick will not be > stopped. This lead to an IPI when a new first timer is enqueued remote. To > prevent this, setting timer_base->in_idle flag is postponed to > tick_nohz_stop_tick(). > > Furthermore this synchronizes the states of timer base is_idle and > tick_stopped. With the timer pull model in place, also the idle state in > the hierarchy of a CPU is synchronized with the other idle related states. > > > Timer pull model > ---------------- > > Placing timers at enqueue time on a target CPU based on dubious heuristics > does not make any sense: > > 1) Most timer wheel timers are canceled or rearmed before they expire. > > 2) The heuristics to predict which CPU will be busy when the timer expires > are wrong by definition. > > So placing the timers at enqueue wastes precious cycles. > > The proper solution to this problem is to always queue the timers on the > local CPU and allow the non pinned timers to be pulled onto a busy CPU at > expiry time. > > Therefore split the timer storage into local pinned and global timers: > Local pinned timers are always expired on the CPU on which they have been > queued. Global timers can be expired on any CPU. > > As long as a CPU is busy it expires both local and global timers. When a > CPU goes idle it arms for the first expiring local timer. If the first > expiring pinned (local) timer is before the first expiring movable timer, > then no action is required because the CPU will wake up before the first > movable timer expires. If the first expiring movable timer is before the > first expiring pinned (local) timer, then this timer is queued into a idle > timerqueue and eventually expired by some other active CPU. > > To avoid global locking the timerqueues are implemented as a hierarchy. The > lowest level of the hierarchy holds the CPUs. The CPUs are associated to > groups of 8, which are separated per node. If more than one CPU group > exist, then a second level in the hierarchy collects the groups. Depending > on the size of the system more than 2 levels are required. Each group has a > "migrator" which checks the timerqueue during the tick for remote timers to > be expired. > > If the last CPU in a group goes idle it reports the first expiring event in > the group up to the next group(s) in the hierarchy. If the last CPU goes > idle it arms its timer for the first system wide expiring timer to ensure > that no timer event is missed. > > > Testing > ~~~~~~~ > > Enqueue > ^^^^^^^ > > The impact of wasting cycles during enqueue by using the heuristic in > contrast to always queuing the timer on the local CPU was measured with a > micro benchmark. Therefore a timer is enqueued and dequeued in a loop with > 1000 repetitions on a isolated CPU. The time the loop takes is measured. A > quarter of the remaining CPUs was kept busy. 
This measurement was repeated > several times. With the patch queue the average duration was reduced by > approximately 25%. > > 145ns plain v6 > 109ns v6 with patch queue > > > Furthermore the impact of residence in deep idle states of an idle system > was investigated. The patch queue doesn't downgrade this behavior. > > dbench test > ^^^^^^^^^^^ > > A dbench test starting X pairs of client servers are used to create load on > the system. The measurable value is the throughput. The tests were executed > on a zen3 machine. The base is the tip tree branch timers/core which is > based on a v6.6-rc1. > > governor menu > > NR timers/core pull-model impact > ---------------------------------------------- > 1 353.19 (0.19) 353.45 (0.30) 0.07% > 2 700.10 (0.96) 687.00 (0.20) -1.87% > 4 1329.37 (0.63) 1282.91 (0.64) -3.49% > 8 2561.16 (1.28) 2493.56 (1.76) -2.64% > 16 4959.96 (0.80) 4914.59 (0.64) -0.91% > 32 9741.92 (3.44) 8979.83 (1.13) -7.82% > 64 16535.40 (2.84) 16388.47 (4.02) -0.89% > 128 22136.83 (2.42) 23174.50 (1.43) 4.69% > 256 39256.77 (4.48) 38994.00 (0.39) -0.67% > 512 36799.03 (1.83) 38091.10 (0.63) 3.51% > 1024 32903.03 (0.86) 35370.70 (0.89) 7.50% > > > governor teo > > NR timers/core pull-model impact > ---------------------------------------------- > 1 350.83 (1.27) 352.45 (0.96) 0.46% > 2 699.52 (0.85) 690.10 (0.54) -1.35% > 4 1339.53 (1.99) 1294.71 (2.71) -3.35% > 8 2574.10 (0.76) 2495.46 (1.97) -3.06% > 16 4898.50 (1.74) 4783.06 (1.64) -2.36% > 32 9115.50 (4.63) 9037.83 (1.58) -0.85% > 64 16663.90 (3.80) 16042.00 (1.72) -3.73% > 128 25044.93 (1.11) 23250.03 (1.08) -7.17% > 256 38059.53 (1.70) 39658.57 (2.98) 4.20% > 512 36369.30 (0.39) 38890.13 (0.36) 6.93% > 1024 33956.83 (1.14) 35514.83 (0.29) 4.59% > > > > Ping Pong Oberservation > ^^^^^^^^^^^^^^^^^^^^^^^ > > During testing on a mostly idle machine a ping pong game could be observed: > a process_timeout timer is expired remotely on a non idle CPU. Then the CPU > where the schedule_timeout() was executed to enqueue the timer comes out of > idle and restarts the timer using schedule_timeout() and goes back to idle > again. This is due to the fair scheduler which tries to keep the task on > the CPU which it previously executed on. > > > > > Possible Next Steps > ~~~~~~~~~~~~~~~~~~~ > > Simple deferrable timers are no longer required as they can be converted to > global timers. If a CPU goes idle, a formerly deferrable timer will not > prevent the CPU to sleep as long as possible. Only the last migrator CPU > has to take care of them. Deferrable timers with timer pinned flags needs > to be expired on the specified CPU but must not prevent CPU from going > idle. They require their own timer base which is never taken into account > when calculating the next expiry time. This conversation and required > cleanup will be done in a follow up series. 
> > > v9..v10: https://lore.kernel.org/r/20231201092654.34614-1-anna-maria@linutronix.de/ > - Address review Feedback of Bigeasy > > > v8..v9: https://lore.kernel.org/r/20231004123454.15691-1-anna-maria@linutronix.de > - Address review feedback > - Add more minor cleanup fixes > - fixes inconsistent idle related states > > > v7..v8: https://lore.kernel.org/r/20230524070629.6377-1-anna-maria@linutronix.de > - Address review feedback > - Move marking timer base idle into tick_nohz_stop_tick() > - Look ahead function to determine possible sleep lenght > > > v6..v7: > - Address review feedback of Frederic and bigeasy > - Change lock, unlock fetch next timer interrupt logic after remote expiry > - Move timer_expire_remote() into tick-internal.h > - Add documentation section about "Required event and timerqueue update > after remote expiry" > - Fix fallout of kernel test robot > > > v5..v6: > > - Address review of Frederic Weisbecker and Peter Zijlstra (spelling, > locking, race in tmigr_handle_remote_cpu()) > > - unconditionally set TIMER_PINNED flag in add_timer_on(); introduce > add_timer() variants which set/unset TIMER_PINNED flag; drop fixing > add_timer_on() call sites, as TIMER_PINNED flag is set implicitly; > Fixing workqueue to use add_timer_global() instead of simply > add_timer() for unbound work. > > - Drop support for siblings to end up in the same level 0 group (could be > added again in a better way as an improvement later on) > > - Do not send IPI for new first deferrable timers > > v4..v5: > - address review feedback of Frederic Weisbecker > - fix issue with group timer update after remote expiry > > v3..v4: > - address review feedback of Frederic Weisbecker > - address kernel test robot fallout > - Move patch 16 "add_timer_on(): Make sure callers have TIMER_PINNED > flag" at the begin of the queue to prevent timers to end up in global > timer base when they were queued using add_timer_on() > - Fix some comments and typos > > v2..v3: https://lore.kernel.org/r/20170418111102.490432548@linutronix.de/ > - Minimize usage of locks by storing data using atomic_cmpxchg() for > migrator information and information about active cpus. 
> > > Thanks, > > Anna-Maria > > > > > Anna-Maria Behnsen (18): > timers: Restructure get_next_timer_interrupt() > timers: Split out get next timer interrupt > timers: Move marking timer bases idle into tick_nohz_stop_tick() > timers: Optimization for timer_base_try_to_set_idle() > timers: Introduce add_timer() variants which modify timer flags > workqueue: Use global variant for add_timer() > timers: add_timer_on(): Make sure TIMER_PINNED flag is set > timers: Ease code in run_local_timers() > timers: Split next timer interrupt logic > timers: Keep the pinned timers separate from the others > timers: Retrieve next expiry of pinned/non-pinned timers separately > timers: Split out "get next timer interrupt" functionality > timers: Add get next timer interrupt functionality for remote CPUs > timers: Check if timers base is handled already > timers: Introduce function to check timer base is_idle flag > timers: Implement the hierarchical pull model > timer_migration: Add tracepoints > timers: Always queue timers on the local CPU > > Richard Cochran (linutronix GmbH) (2): > timers: Restructure internal locking > tick/sched: Split out jiffies update helper function > > MAINTAINERS | 1 + > include/linux/cpuhotplug.h | 1 + > include/linux/timer.h | 16 +- > include/trace/events/timer_migration.h | 297 +++++ > kernel/time/Makefile | 3 + > kernel/time/tick-internal.h | 14 + > kernel/time/tick-sched.c | 65 +- > kernel/time/timer.c | 505 +++++-- > kernel/time/timer_migration.c | 1693 ++++++++++++++++++++++++ > kernel/time/timer_migration.h | 147 ++ > kernel/workqueue.c | 2 +- > 11 files changed, 2629 insertions(+), 115 deletions(-) > create mode 100644 include/trace/events/timer_migration.h > create mode 100644 kernel/time/timer_migration.c > create mode 100644 kernel/time/timer_migration.h Hi Anna-Maria, I did some quick measurements on a pixel6 Android 14 with 6.6 kernel baseline. The workload is 5 iterations of uibenchjanktests (~70 Min runtime total). Backport of timers/pushpull up to: 6b7e23d1f495 ("timers: Always queue timers on the local CPU"). Power: +------------+--------+------------+-------+-----------+ | channel | metric | tag | value | perc_diff | +------------+--------+------------+-------+-----------+ | CPU | gmean | mainline_5 | 196.6 | 0.0% | | CPU-Big | gmean | mainline_5 | 65.3 | 0.0% | | CPU-Little | gmean | mainline_5 | 99.6 | 0.0% | | CPU-Mid | gmean | mainline_5 | 31.6 | 0.0% | | GPU | gmean | mainline_5 | 36.7 | 0.0% | | Total | gmean | mainline_5 | 233.3 | 0.0% | | CPU | gmean | pushpull_5 | 195.9 | -0.35% | | CPU-Big | gmean | pushpull_5 | 64.8 | -0.85% | | CPU-Little | gmean | pushpull_5 | 98.5 | -1.12% | | CPU-Mid | gmean | pushpull_5 | 32.6 | 3.13% | | GPU | gmean | pushpull_5 | 36.8 | 0.19% | | Total | gmean | pushpull_5 | 232.6 | -0.26% | +------------+--------+------------+-------+-----------+ (Slightly skewed in favor of mainline because of starting temperature.) 
Idle residency:
+------------+---------+------------+--------+
|    tag     | cluster | idle_state |  time  |
+------------+---------+------------+--------+
| mainline_5 | little  |       -1.0 | 518.42 |
| mainline_5 | little  |        0.0 | 238.28 |
| mainline_5 | little  |        1.0 |  19.7  |
| mainline_5 | mid     |       -1.0 | 201.0  |
| mainline_5 | mid     |        0.0 | 335.26 |
| mainline_5 | mid     |        1.0 | 240.15 |
| mainline_5 | big     |       -1.0 | 173.86 |
| mainline_5 | big     |        0.0 | 330.93 |
| mainline_5 | big     |        1.0 | 271.61 |
| pushpull_5 | little  |       -1.0 | 526.45 |
| pushpull_5 | little  |        0.0 | 257.77 |
| pushpull_5 | little  |        1.0 |   5.18 |
| pushpull_5 | mid     |       -1.0 | 220.98 |
| pushpull_5 | mid     |        0.0 | 347.43 |
| pushpull_5 | mid     |        1.0 | 220.98 |
| pushpull_5 | big     |       -1.0 | 177.36 |
| pushpull_5 | big     |        0.0 | 331.61 |
| pushpull_5 | big     |        1.0 | 280.42 |
+------------+---------+------------+--------+

We can see the improvement we were hoping for:
Longer idle times on the big cores.

For completeness here are the idle misses:
+------------+-------+--------------------+
|    tag     | type  |     count_perc     |
+------------+-------+--------------------+
| mainline_5 | False | 3.4829999999999997 |
| mainline_5 | True  | 15.639000000000001 |
| pushpull_5 | False | 3.487              |
| pushpull_5 | True  | 15.881             |
+------------+-------+--------------------+

If there is anything else you would like to see some data on, please let
me know.

Kind Regards,
Christian
Hi,

Christian Loehle <christian.loehle@arm.com> writes:

> On 15/01/2024 14:37, Anna-Maria Behnsen wrote:
>> [...]
>
> [...]
>
> We can see the improvement we were hoping for:
> Longer idle times on the big cores.

This is good to know, that it works and I didn't break this in the
meantime :)

Thanks a lot for testing!

	Anna-Maria
Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > +static void tmigr_connect_child_parent(struct tmigr_group *child, > + struct tmigr_group *parent) > +{ > + union tmigr_state childstate; > + > + raw_spin_lock_irq(&child->lock); > + raw_spin_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING); > + > + child->parent = parent; > + child->childmask = BIT(parent->num_children++); > + > + raw_spin_unlock(&parent->lock); > + raw_spin_unlock_irq(&child->lock); > + > + /* > + * To prevent inconsistent states, active children need to be active in > + * the new parent as well. Inactive children are already marked inactive > + * in the parent group. > + */ > + childstate.state = atomic_read(&child->migr_state); > + if (childstate.migrator != TMIGR_NONE) { Is it possible here to connect a running online child (not one that we just created) to a new parent? If not, is it possible that a newly created child is not TMIGR_NONE? > + struct tmigr_walk data; > + > + data.childmask = child->childmask; > + > + /* > + * There is only one new level per time. When connecting the > + * child and the parent and set the child active when the parent > + * is inactive, the parent needs to be the uppermost > + * level. Otherwise there went something wrong! > + */ > + WARN_ON(!tmigr_active_up(parent, child, &data) && parent->parent); > + } > +} [...] > +static int tmigr_cpu_online(unsigned int cpu) > +{ > + struct tmigr_cpu *tmc = this_cpu_ptr(&tmigr_cpu); > + int ret; > + > + /* First online attempt? Initialize CPU data */ > + if (!tmc->tmgroup) { > + raw_spin_lock_init(&tmc->lock); > + > + ret = tmigr_add_cpu(cpu); > + if (ret < 0) > + return ret; > + > + if (tmc->childmask == 0) > + return -EINVAL; > + > + timerqueue_init(&tmc->cpuevt.nextevt); > + tmc->cpuevt.nextevt.expires = KTIME_MAX; > + tmc->cpuevt.ignore = true; > + tmc->cpuevt.cpu = cpu; > + > + tmc->remote = false; > + WRITE_ONCE(tmc->wakeup, KTIME_MAX); > + } > + raw_spin_lock_irq(&tmc->lock); > + tmc->idle = timer_base_is_idle(); > + if (!tmc->idle) > + __tmigr_cpu_activate(tmc); Heh, I was about to say that it's impossible that timer_base_is_idle() at this stage but actually if we run in nohz_full... It happens so that nohz_full is deactivated until rcutree_online_cpu() which calls tick_dep_clear() but it's a pure coincidence that might disappear one day. So yes, let's keep it that way. Thanks. > + tmc->online = true; > + raw_spin_unlock_irq(&tmc->lock); > + return 0; > +}
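For readers following the childstate/groupstate checks in this thread: the per-group state that keeps being read with atomic_read() is a single packed 32-bit word, so one read yields a consistent view of which children are active and which child is the migrator. A rough model of that layout and of the racy migrator check discussed above (field names follow the series; the details here are a sketch, and group_needs_handling() is a made-up name modelled on tmigr_check_migrator()):

	#define TMIGR_NONE	0xFF	/* group has no migrator assigned */

	union tmigr_state {
		u32 state;		/* stored in atomic_t group->migr_state */
		struct {
			u8	active;		/* childmask bits of active children */
			u8	migrator;	/* childmask of the migrator, or TMIGR_NONE */
			u16	seq;		/* bumped on updates, for cmpxchg loops */
		} __packed;
	};

	static bool group_needs_handling(u32 state, u8 childmask)
	{
		union tmigr_state s = { .state = state };

		/* Handle the group if @childmask is the migrator or nobody is. */
		return s.migrator == childmask || s.migrator == TMIGR_NONE;
	}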
Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > +static bool tmigr_handle_remote_up(struct tmigr_group *group, > + struct tmigr_group *child, > + void *ptr) > +{ > + struct tmigr_remote_data *data = ptr; > + u64 now, next = KTIME_MAX; > + struct tmigr_event *evt; > + unsigned long jif; > + u8 childmask; > + > + jif = data->basej; > + now = data->now; > + > + childmask = data->childmask; > + > +again: > + /* > + * Handle the group only if @childmask is the migrator or if the > + * group has no migrator. Otherwise the group is active and is > + * handled by its own migrator. > + */ > + if (!tmigr_check_migrator(group, childmask)) > + return true; > + > + raw_spin_lock_irq(&group->lock); > + > + evt = tmigr_next_expired_groupevt(group, now); > + > + if (evt) { > + unsigned int remote_cpu = evt->cpu; > + > + raw_spin_unlock_irq(&group->lock); > + > + next = tmigr_handle_remote_cpu(remote_cpu, now, jif); > + > + /* check if there is another event, that needs to be handled */ > + goto again; > + } else { > + raw_spin_unlock_irq(&group->lock); > + } > + > + /* > + * Update of childmask for the next level and keep track of the expiry > + * of the first event that needs to be handled > + */ > + data->childmask = group->childmask; > + data->firstexp = next; So assume we have: [GRP1:0] migrator = [GRP0:0] active = [GRP0:0] nextevt = TIMER3 / \ [GRP0:0] [GRP0:1] migrator = CPU0 migrator = NONE active = CPU0 active = NONE nextevt = KTIME_MAX nextevt = TIMER3 / \ / \ 0 1 2 3 idle idle idle idle (TIMER3) Then CPU 0 goes idle: [GRP1:0] migrator = NONE active = NONE nextevt = TIMER3 / \ [GRP0:0] [GRP0:1] migrator = NONE migrator = NONE active = NONE active = NONE nextevt = KTIME_MAX nextevt = TIMER3 / \ / \ 0 1 2 3 idle idle idle idle (TIMER3) CPU 0 is the idle migrator and its tmc->wakeup is TIMER3. But CPU 0 has a local timer that expires before TIMER3. When that timer interrupt fires, it raises the softirq, which executes on IRQ tail. So CPU0 eventually calls tmigr_handle_remote() before TIMER3 has expired. This leads to tmigr_next_expired_groupevt() to return NULL and then data->firstexp = KTIME_MAX and then tmc->wakeup = KTIME_MAX. Later on, tmigr_new_timer() is called with a KTIME_MAX global event and so tmc->wakeup stays with KTIME_MAX, ignoring TIMER3. It looks like you need to handle the tmigr_next_expired_groupevt() case returning NULL. Thanks.
Le Thu, Feb 01, 2024 at 05:15:37PM +0100, Anna-Maria Behnsen a écrit : > Frederic Weisbecker <frederic@kernel.org> writes: > > > Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > >> +static void tmigr_connect_child_parent(struct tmigr_group *child, > >> + struct tmigr_group *parent) > >> +{ > >> + union tmigr_state childstate; > >> + > >> + raw_spin_lock_irq(&child->lock); > >> + raw_spin_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING); > >> + > >> + child->parent = parent; > >> + child->childmask = BIT(parent->num_children++); > >> + > >> + raw_spin_unlock(&parent->lock); > >> + raw_spin_unlock_irq(&child->lock); > >> + > >> + /* > >> + * To prevent inconsistent states, active children need to be active in > >> + * the new parent as well. Inactive children are already marked inactive > >> + * in the parent group. > >> + */ > >> + childstate.state = atomic_read(&child->migr_state); > >> + if (childstate.migrator != TMIGR_NONE) { > > > > Is it possible here to connect a running online child (not one that we just > > created) to a new parent? > > connect_child_parent() is only executed for the just created ones. So, > yes in theory this would be possible, but it doesn't happen as > tmigr_setup_groups() takes care to make it right (hopefully :)). When a > LVL0 group has some space left, only the connection between tmc and the > LVL0 group is done in tmigr_setup_groups(). If there is no space left in > LVL0 group, then a new group is created and depending on the levels > which has to be created only executed for the new ones. > > > If not, is it possible that a newly created child is > > not TMIGR_NONE? > > Yes. See tmigr_cpu_online(). When new groups have to be created starting > from LVL0, then they are not active - so TMIGR_NONE is set. Activating > the new online CPU is done afterwards. > > But if it is required to add also a new level at the top, then it is > mandatory to propagate the active state of the already existing child to > the new parent. The connect_child_parent() is then also executed for the > formerly top level group (child) to the newly created group (parent). Ah and this is why we have the "if (childstate.migrator != TMIGR_NONE)" branch, right? > > Heh, I was about to say that it's impossible that timer_base_is_idle() > > at this stage but actually if we run in nohz_full... > > > > It happens so that nohz_full is deactivated until rcutree_online_cpu() > > which calls tick_dep_clear() but it's a pure coincidence that might > > disappear one day. So yes, let's keep it that way. > > I instrumented the code (with NOHZ FULL and NOHZ_IDLE) to make sure the > timer migration hierarchy state 'idle' is in sync with the timer base > 'idle'. And this was one part where it was possible that it runs out of > sync as I remember correctly. But if I understood you correctly, this > shouldn't happen at the moment? Well, it's not supposed to :-) Thanks. > > Thanks, > > Anna-Maria >
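Condensing the answer into comments on the hunk quoted earlier (the code lines are the ones from the patch, the comments summarize the explanation above):

	/*
	 * Newly created groups start out inactive (TMIGR_NONE), so for the
	 * common "connect a freshly created child to a freshly created
	 * parent" case nothing is propagated here. The exception is adding
	 * a new level at the top at runtime: then @child is the former top
	 * level group and may already be active, and that active state has
	 * to become visible in the brand-new @parent, otherwise the new
	 * root would consider the whole hierarchy idle.
	 */
	childstate.state = atomic_read(&child->migr_state);
	if (childstate.migrator != TMIGR_NONE) {
		struct tmigr_walk data;

		data.childmask = child->childmask;

		/* Only the uppermost level may be extended while @child is active. */
		WARN_ON(!tmigr_active_up(parent, child, &data) && parent->parent);
	}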
Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > +/* > + * Returns true, if there is nothing to be propagated to the next level > + * > + * @data->firstexp is set to expiry of first gobal event of the (top level of > + * the) hierarchy, but only when hierarchy is completely idle. > + * > + * This is the only place where the group event expiry value is set. > + */ > +static > +bool tmigr_update_events(struct tmigr_group *group, struct tmigr_group *child, > + struct tmigr_walk *data, union tmigr_state childstate, > + union tmigr_state groupstate) > +{ > + struct tmigr_event *evt, *first_childevt; > + bool walk_done, remote = data->remote; > + bool leftmost_change = false; > + u64 nextexp; > + > + if (child) { > + raw_spin_lock(&child->lock); > + raw_spin_lock_nested(&group->lock, SINGLE_DEPTH_NESTING); > + > + if (childstate.active) { > + walk_done = true; > + goto unlock; > + } > + > + first_childevt = tmigr_next_groupevt(child); > + nextexp = child->next_expiry; > + evt = &child->groupevt; > + } else { > + nextexp = data->nextexp; > + > + first_childevt = evt = data->evt; > + > + /* > + * Walking the hierarchy is required in any case when a > + * remote expiry was done before. This ensures to not lose > + * already queued events in non active groups (see section > + * "Required event and timerqueue update after a remote > + * expiry" in the documentation at the top). > + * > + * The two call sites which are executed without a remote expiry > + * before, are not prevented from propagating changes through > + * the hierarchy by the return: > + * - When entering this path by tmigr_new_timer(), @evt->ignore > + * is never set. > + * - tmigr_inactive_up() takes care of the propagation by > + * itself and ignores the return value. But an immediate > + * return is required because nothing has to be done in this > + * level as the event could be ignored. > + */ > + if (evt->ignore && !remote) > + return true; > + > + raw_spin_lock(&group->lock); > + } > + > + if (nextexp == KTIME_MAX) { > + evt->ignore = true; > + > + /* > + * When the next child event could be ignored (nextexp is > + * KTIME_MAX) and there was no remote timer handling before or > + * the group is already active, there is no need to walk the > + * hierarchy even if there is a parent group. > + * > + * The other way round: even if the event could be ignored, but > + * if a remote timer handling was executed before and the group > + * is not active, walking the hierarchy is required to not miss > + * an enqueued timer in the non active group. The enqueued timer > + * of the group needs to be propagated to a higher level to > + * ensure it is handled. > + */ > + if (!remote || groupstate.active) { > + walk_done = true; > + goto unlock; > + } > + } else { > + /* > + * An update of @evt->cpu and @evt->ignore flag is required only > + * when @child is set (the child is equal or higher than lvl0), > + * but it doesn't matter if it is written once more to the per > + * CPU event; make the update unconditional. > + */ > + evt->cpu = first_childevt->cpu; > + evt->ignore = false; > + } > + > + walk_done = !group->parent; > + > + /* > + * If the child event is already queued in the group, remove it from the > + * queue when the expiry time changed only. 
> + */ > + if (timerqueue_node_queued(&evt->nextevt)) { > + if (evt->nextevt.expires == nextexp) > + goto check_toplvl; > + > + leftmost_change = timerqueue_getnext(&group->events) == &evt->nextevt; > + if (!timerqueue_del(&group->events, &evt->nextevt)) > + WRITE_ONCE(group->next_expiry, KTIME_MAX); > + } > + > + evt->nextevt.expires = nextexp; > + > + if (timerqueue_add(&group->events, &evt->nextevt)) { > + leftmost_change = true; > + WRITE_ONCE(group->next_expiry, nextexp); > + } > + > +check_toplvl: > + if (walk_done && (groupstate.migrator == TMIGR_NONE)) { > + /* > + * Nothing to do when first event didn't changed and update was > + * done during remote timer handling. > + */ > + if (remote && !leftmost_change) So if the first timer in the list hasn't changed, and that first timer belongs to another CPU (and another group) than the tmc for which we are remotely handling timers and re-propagating timers up, then data->firstexp will be after the leftmost timer expiration (data->firstexp could even be KTIME_MAX in the worst case), and so will be tmc->wakeup for the caller of tmigr_handle_remote()? Thanks. > + goto unlock; > + /* > + * The top level group is idle and it has to be ensured the > + * global timers are handled in time. (This could be optimized > + * by keeping track of the last global scheduled event and only > + * arming it on the CPU if the new event is earlier. Not sure if > + * its worth the complexity.) > + */ > + data->firstexp = tmigr_next_groupevt_expires(group); > + } > + > +unlock: > + raw_spin_unlock(&group->lock); > + > + if (child) > + raw_spin_unlock(&child->lock); > + > + return walk_done; > +}
Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > +/* > + * Returns true, if there is nothing to be propagated to the next level > + * > + * @data->firstexp is set to expiry of first gobal event of the (top level of > + * the) hierarchy, but only when hierarchy is completely idle. > + * > + * This is the only place where the group event expiry value is set. > + */ > +static > +bool tmigr_update_events(struct tmigr_group *group, struct tmigr_group *child, > + struct tmigr_walk *data, union tmigr_state childstate, > + union tmigr_state groupstate) > +{ > + struct tmigr_event *evt, *first_childevt; > + bool walk_done, remote = data->remote; > + bool leftmost_change = false; > + u64 nextexp; > + > + if (child) { > + raw_spin_lock(&child->lock); > + raw_spin_lock_nested(&group->lock, SINGLE_DEPTH_NESTING); > + > + if (childstate.active) { > + walk_done = true; > + goto unlock; > + } > + > + first_childevt = tmigr_next_groupevt(child); > + nextexp = child->next_expiry; > + evt = &child->groupevt; > + } else { > + nextexp = data->nextexp; > + > + first_childevt = evt = data->evt; > + > + /* > + * Walking the hierarchy is required in any case when a > + * remote expiry was done before. This ensures to not lose > + * already queued events in non active groups (see section > + * "Required event and timerqueue update after a remote > + * expiry" in the documentation at the top). > + * > + * The two call sites which are executed without a remote expiry > + * before, are not prevented from propagating changes through > + * the hierarchy by the return: > + * - When entering this path by tmigr_new_timer(), @evt->ignore > + * is never set. > + * - tmigr_inactive_up() takes care of the propagation by > + * itself and ignores the return value. But an immediate > + * return is required because nothing has to be done in this > + * level as the event could be ignored. > + */ > + if (evt->ignore && !remote) > + return true; > + > + raw_spin_lock(&group->lock); > + } > + > + if (nextexp == KTIME_MAX) { > + evt->ignore = true; > + > + /* > + * When the next child event could be ignored (nextexp is > + * KTIME_MAX) and there was no remote timer handling before or > + * the group is already active, there is no need to walk the > + * hierarchy even if there is a parent group. > + * > + * The other way round: even if the event could be ignored, but > + * if a remote timer handling was executed before and the group > + * is not active, walking the hierarchy is required to not miss > + * an enqueued timer in the non active group. The enqueued timer > + * of the group needs to be propagated to a higher level to > + * ensure it is handled. > + */ > + if (!remote || groupstate.active) { > + walk_done = true; > + goto unlock; So if the current tmc going inactive was the migrator for the whole hierarchy and it is reaching here the top-level, this assumes that if none of this tmc's groups have a timer, then it can just return. But what if the top level has timers from other children? Who is going to handle them then? Should this be "goto check_toplvl" instead? Thanks. > + } > + } else { > + /* > + * An update of @evt->cpu and @evt->ignore flag is required only > + * when @child is set (the child is equal or higher than lvl0), > + * but it doesn't matter if it is written once more to the per > + * CPU event; make the update unconditional. 
> + */ > + evt->cpu = first_childevt->cpu; > + evt->ignore = false; > + } > + > + walk_done = !group->parent; > + > + /* > + * If the child event is already queued in the group, remove it from the > + * queue when the expiry time changed only. > + */ > + if (timerqueue_node_queued(&evt->nextevt)) { > + if (evt->nextevt.expires == nextexp) > + goto check_toplvl; > + > + leftmost_change = timerqueue_getnext(&group->events) == &evt->nextevt; > + if (!timerqueue_del(&group->events, &evt->nextevt)) > + WRITE_ONCE(group->next_expiry, KTIME_MAX); > + } > + > + evt->nextevt.expires = nextexp; > + > + if (timerqueue_add(&group->events, &evt->nextevt)) { > + leftmost_change = true; > + WRITE_ONCE(group->next_expiry, nextexp); > + } > + > +check_toplvl: > + if (walk_done && (groupstate.migrator == TMIGR_NONE)) { > + /* > + * Nothing to do when first event didn't changed and update was > + * done during remote timer handling. > + */ > + if (remote && !leftmost_change) > + goto unlock; > + /* > + * The top level group is idle and it has to be ensured the > + * global timers are handled in time. (This could be optimized > + * by keeping track of the last global scheduled event and only > + * arming it on the CPU if the new event is earlier. Not sure if > + * its worth the complexity.) > + */ > + data->firstexp = tmigr_next_groupevt_expires(group); > + } > + > +unlock: > + raw_spin_unlock(&group->lock); > + > + if (child) > + raw_spin_unlock(&child->lock); > + > + return walk_done; > +}
Le Mon, Feb 05, 2024 at 04:59:45PM +0100, Anna-Maria Behnsen a écrit : > Frederic Weisbecker <frederic@kernel.org> writes: > > > Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > >> +static bool tmigr_handle_remote_up(struct tmigr_group *group, > >> + struct tmigr_group *child, > >> + void *ptr) > >> +{ > >> + struct tmigr_remote_data *data = ptr; > >> + u64 now, next = KTIME_MAX; > >> + struct tmigr_event *evt; > >> + unsigned long jif; > >> + u8 childmask; > >> + > >> + jif = data->basej; > >> + now = data->now; > >> + > >> + childmask = data->childmask; > >> + > >> +again: > >> + /* > >> + * Handle the group only if @childmask is the migrator or if the > >> + * group has no migrator. Otherwise the group is active and is > >> + * handled by its own migrator. > >> + */ > >> + if (!tmigr_check_migrator(group, childmask)) > >> + return true; > >> + > >> + raw_spin_lock_irq(&group->lock); > >> + > >> + evt = tmigr_next_expired_groupevt(group, now); > >> + > >> + if (evt) { > >> + unsigned int remote_cpu = evt->cpu; > >> + > >> + raw_spin_unlock_irq(&group->lock); > >> + > >> + next = tmigr_handle_remote_cpu(remote_cpu, now, jif); > >> + > >> + /* check if there is another event, that needs to be handled */ > >> + goto again; > >> + } else { > >> + raw_spin_unlock_irq(&group->lock); > >> + } > >> + > >> + /* > >> + * Update of childmask for the next level and keep track of the expiry > >> + * of the first event that needs to be handled > >> + */ > >> + data->childmask = group->childmask; > >> + data->firstexp = next; > > > > So assume we have: > > > > [GRP1:0] > > migrator = [GRP0:0] > > active = [GRP0:0] > > nextevt = TIMER3 > > / \ > > [GRP0:0] [GRP0:1] > > migrator = CPU0 migrator = NONE > > active = CPU0 active = NONE > > nextevt = KTIME_MAX nextevt = TIMER3 > > / \ / \ > > 0 1 2 3 > > idle idle idle idle (TIMER3) > > > > Then CPU 0 goes idle: > > > > [GRP1:0] > > migrator = NONE > > active = NONE > > nextevt = TIMER3 > > / \ > > [GRP0:0] [GRP0:1] > > migrator = NONE migrator = NONE > > active = NONE active = NONE > > nextevt = KTIME_MAX nextevt = TIMER3 > > / \ / \ > > 0 1 2 3 > > idle idle idle idle (TIMER3) > > > > CPU 0 is the idle migrator and its tmc->wakeup is TIMER3. > > But CPU 0 has a local timer that expires before TIMER3. > > > > When that timer interrupt fires, it raises the softirq, which > > executes on IRQ tail. So CPU0 eventually calls tmigr_handle_remote() > > before TIMER3 has expired. > > > > This leads to tmigr_next_expired_groupevt() to return NULL and then > > data->firstexp = KTIME_MAX and then tmc->wakeup = KTIME_MAX. > > > > Later on, tmigr_new_timer() is called with a KTIME_MAX global > > event and so tmc->wakeup stays with KTIME_MAX, ignoring TIMER3. > > > > It looks like you need to handle the tmigr_next_expired_groupevt() > > case returning NULL. > > Yes. You are right. group->next_expiry is updated via > tmigr_next_expired_groupevt(). So I should take this value to rely on > for the firstevt. > > But with this, we do not need the return value of > tmigr_handle_remote_cpu(). When the upperst level is inactive and there > is a timer (e.g. propagated there by tmigr_handle_remote_cpu()), the > return value of tmigr_handle_remote_cpu() is the next_expiry of the top > level group. But this value is also noticed by walking the hierarchy up > to the top level with tmigr_handle_remote_up(). Indeed that's a nice simplification! 
This also cures an issue where KTIME_MAX overwrites data->firstexp in tmigr_handle_remote_up() while walking the upper levels after an expiration on a lower level. Thanks.
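Condensed into code, the simplification agreed on here drops the reliance on tmigr_handle_remote_cpu()'s return value and takes the group's own next_expiry for the wakeup, roughly like this at the end of tmigr_handle_remote_up() (a sketch of the direction, not the actual follow-up patch):

	/*
	 * group->next_expiry was just (re)computed by
	 * tmigr_next_expired_groupevt() under the group lock. Using it here
	 * keeps a queued but not yet expired event (TIMER3 in the example
	 * above) as the wakeup value instead of overwriting it with
	 * KTIME_MAX when no event was due yet.
	 */
	data->childmask = group->childmask;
	data->firstexp = READ_ONCE(group->next_expiry);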
Le Tue, Feb 06, 2024 at 12:03:32PM +0100, Anna-Maria Behnsen a écrit : > Frederic Weisbecker <frederic@kernel.org> writes: > > > Le Mon, Jan 15, 2024 at 03:37:41PM +0100, Anna-Maria Behnsen a écrit : > >> +/* > >> + * Returns true, if there is nothing to be propagated to the next level > >> + * > >> + * @data->firstexp is set to expiry of first gobal event of the (top level of > >> + * the) hierarchy, but only when hierarchy is completely idle. > >> + * > >> + * This is the only place where the group event expiry value is set. > >> + */ > >> +static > >> +bool tmigr_update_events(struct tmigr_group *group, struct tmigr_group *child, > >> + struct tmigr_walk *data, union tmigr_state childstate, > >> + union tmigr_state groupstate) > >> +{ > >> + struct tmigr_event *evt, *first_childevt; > >> + bool walk_done, remote = data->remote; > >> + bool leftmost_change = false; > >> + u64 nextexp; > >> + > >> + if (child) { > >> + raw_spin_lock(&child->lock); > >> + raw_spin_lock_nested(&group->lock, SINGLE_DEPTH_NESTING); > >> + > >> + if (childstate.active) { > >> + walk_done = true; > >> + goto unlock; > >> + } > >> + > >> + first_childevt = tmigr_next_groupevt(child); > >> + nextexp = child->next_expiry; > >> + evt = &child->groupevt; > >> + } else { > >> + nextexp = data->nextexp; > >> + > >> + first_childevt = evt = data->evt; > >> + > >> + /* > >> + * Walking the hierarchy is required in any case when a > >> + * remote expiry was done before. This ensures to not lose > >> + * already queued events in non active groups (see section > >> + * "Required event and timerqueue update after a remote > >> + * expiry" in the documentation at the top). > >> + * > >> + * The two call sites which are executed without a remote expiry > >> + * before, are not prevented from propagating changes through > >> + * the hierarchy by the return: > >> + * - When entering this path by tmigr_new_timer(), @evt->ignore > >> + * is never set. > >> + * - tmigr_inactive_up() takes care of the propagation by > >> + * itself and ignores the return value. But an immediate > >> + * return is required because nothing has to be done in this > >> + * level as the event could be ignored. > >> + */ > >> + if (evt->ignore && !remote) > >> + return true; > >> + > >> + raw_spin_lock(&group->lock); > >> + } > >> + > >> + if (nextexp == KTIME_MAX) { > >> + evt->ignore = true; > >> + > >> + /* > >> + * When the next child event could be ignored (nextexp is > >> + * KTIME_MAX) and there was no remote timer handling before or > >> + * the group is already active, there is no need to walk the > >> + * hierarchy even if there is a parent group. > >> + * > >> + * The other way round: even if the event could be ignored, but > >> + * if a remote timer handling was executed before and the group > >> + * is not active, walking the hierarchy is required to not miss > >> + * an enqueued timer in the non active group. The enqueued timer > >> + * of the group needs to be propagated to a higher level to > >> + * ensure it is handled. > >> + */ > >> + if (!remote || groupstate.active) { > >> + walk_done = true; > >> + goto unlock; > >> + } > >> + } else { > >> + /* > >> + * An update of @evt->cpu and @evt->ignore flag is required only > >> + * when @child is set (the child is equal or higher than lvl0), > >> + * but it doesn't matter if it is written once more to the per > >> + * CPU event; make the update unconditional. 
> >> + */ > >> + evt->cpu = first_childevt->cpu; > >> + evt->ignore = false; > >> + } > >> + > >> + walk_done = !group->parent; > >> + > >> + /* > >> + * If the child event is already queued in the group, remove it from the > >> + * queue when the expiry time changed only. > >> + */ > >> + if (timerqueue_node_queued(&evt->nextevt)) { > >> + if (evt->nextevt.expires == nextexp) > >> + goto check_toplvl; > >> + > >> + leftmost_change = timerqueue_getnext(&group->events) == &evt->nextevt; > >> + if (!timerqueue_del(&group->events, &evt->nextevt)) > >> + WRITE_ONCE(group->next_expiry, KTIME_MAX); > >> + } > >> + > >> + evt->nextevt.expires = nextexp; > >> + > >> + if (timerqueue_add(&group->events, &evt->nextevt)) { > >> + leftmost_change = true; > >> + WRITE_ONCE(group->next_expiry, nextexp); > >> + } > >> + > >> +check_toplvl: > >> + if (walk_done && (groupstate.migrator == TMIGR_NONE)) { > >> + /* > >> + * Nothing to do when first event didn't changed and update was > >> + * done during remote timer handling. > >> + */ > >> + if (remote && !leftmost_change) > > > > So if the first timer in the list hasn't changed, and that first timer belongs > > to another CPU (and another group) than the tmc for which we are remotely > > handling timers and re-propagating timers up, then data->firstexp will be > > after the leftmost timer expiration (data->firstexp could even be KTIME_MAX > > in the worst case), and so will be tmc->wakeup for the caller of > > tmigr_handle_remote()? > > > > This is related to the discussion regarding tmigr_handle_remote_up(). So > this should be also covered by the change I proposed there. > > And then we definitely do not need the update of data->firstevt here, as > we are still on the way to top to make sure all events are handled. And > the first event which needs to be handled by the migrator CPU is set by > the call to tmigr_next_expired_groupevt(). Sounds good! Thanks. > > > > >> + goto unlock; > >> + /* > >> + * The top level group is idle and it has to be ensured the > >> + * global timers are handled in time. (This could be optimized > >> + * by keeping track of the last global scheduled event and only > >> + * arming it on the CPU if the new event is earlier. Not sure if > >> + * its worth the complexity.) > >> + */ > >> + data->firstexp = tmigr_next_groupevt_expires(group); > >> + } > >> + > >> +unlock: > >> + raw_spin_unlock(&group->lock); > >> + > >> + if (child) > >> + raw_spin_unlock(&child->lock); > >> + > >> + return walk_done; > >> +}
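Put together, the outcome of this sub-thread is that the remote case simply bails out at check_toplvl and no leftmost_change tracking is needed for it, since the migrator's wakeup value is computed during the tmigr_handle_remote_up() walk itself. A sketch of how the check could then look (an illustration of the discussion, not a posted patch):

check_toplvl:
	if (walk_done && (groupstate.migrator == TMIGR_NONE)) {
		/*
		 * During remote expiry data->firstexp is taken from
		 * group->next_expiry in tmigr_handle_remote_up(), so nothing
		 * needs to be reported from here.
		 */
		if (remote)
			goto unlock;

		/*
		 * Hierarchy is completely idle: hand the first global event
		 * of the top level group back to the caller.
		 */
		data->firstexp = tmigr_next_groupevt_expires(group);
	}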