From patchwork Tue Dec 19 14:08:36 2023
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 180961
From: Frederic Weisbecker
To: LKML
Cc: Frederic Weisbecker, Boqun Feng, Joel Fernandes, Neeraj Upadhyay, "Paul E.
    McKenney", Uladzislau Rezki, Zqiang, rcu, Hillf Danton, Neeraj Upadhyay
Subject: [PATCH 1/8] rcu/nocb: Make IRQs disablement symmetric
Date: Tue, 19 Dec 2023 15:08:36 +0100
Message-Id: <20231219140843.939329-2-frederic@kernel.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231219140843.939329-1-frederic@kernel.org>
References: <20231219140843.939329-1-frederic@kernel.org>

Currently IRQs are disabled upon entry to call_rcu() and then re-enabled
in a way that depends on the context:

* If the CPU is in nocb mode:

  - If the callback is enqueued in the bypass list, IRQs are re-enabled
    implicitly by rcu_nocb_try_bypass()

  - If the callback is enqueued in the normal list, IRQs are re-enabled
    implicitly by __call_rcu_nocb_wake()

* If the CPU is NOT in nocb mode, IRQs are re-enabled explicitly from
  call_rcu()

This makes the code a bit hard to follow, especially as it interleaves
with nocb locking.

To make the IRQ flags coverage clearer and also in order to prepare for
moving all the nocb enqueue code to its own function, always re-enable
the IRQ flags explicitly from call_rcu(). (A simplified model of the
resulting flow is sketched after the diff below.)

Reviewed-by: Neeraj Upadhyay (AMD)
Signed-off-by: Frederic Weisbecker
---
 kernel/rcu/tree.c      |  9 ++++++---
 kernel/rcu/tree_nocb.h | 20 +++++++++-----------
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 499803234176..91b2eb772e86 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2735,8 +2735,10 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy_in)
         }
 
         check_cb_ovld(rdp);
-        if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy))
+        if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags, lazy)) {
+                local_irq_restore(flags);
                 return; // Enqueued onto ->nocb_bypass, so just leave.
+        }
         // If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
         rcu_segcblist_enqueue(&rdp->cblist, head);
         if (__is_kvfree_rcu_offset((unsigned long)func))
@@ -2754,8 +2756,8 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy_in)
                 __call_rcu_nocb_wake(rdp, was_alldone, flags); /* unlocks */
         } else {
                 __call_rcu_core(rdp, head, flags);
-                local_irq_restore(flags);
         }
+        local_irq_restore(flags);
 }
 
 #ifdef CONFIG_RCU_LAZY
@@ -4651,8 +4653,9 @@ void rcutree_migrate_callbacks(int cpu)
                 __call_rcu_nocb_wake(my_rdp, true, flags);
         } else {
                 rcu_nocb_unlock(my_rdp); /* irqs remain disabled. */
-                raw_spin_unlock_irqrestore_rcu_node(my_rnp, flags);
+                raw_spin_unlock_rcu_node(my_rnp); /* irqs remain disabled. */
         }
+        local_irq_restore(flags);
         if (needwake)
                 rcu_gp_kthread_wake();
         lockdep_assert_irqs_enabled();
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index d82f96a66600..06c8ff85850c 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -532,9 +532,7 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
         // 2. Both of these conditions are met:
         //    a. The bypass list previously had only lazy CBs, and:
         //    b. The new CB is non-lazy.
-        if (ncbs && (!bypass_is_lazy || lazy)) {
-                local_irq_restore(flags);
-        } else {
+        if (!ncbs || (bypass_is_lazy && !lazy)) {
                 // No-CBs GP kthread might be indefinitely asleep, if so, wake.
                 rcu_nocb_lock(rdp); // Rare during call_rcu() flood.
                 if (!rcu_segcblist_pend_cbs(&rdp->cblist)) {
@@ -544,7 +542,7 @@ static bool rcu_nocb_try_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
                 } else {
                         trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
                                             TPS("FirstBQnoWake"));
-                        rcu_nocb_unlock_irqrestore(rdp, flags);
+                        rcu_nocb_unlock(rdp);
                 }
         }
         return true; // Callback already enqueued.
@@ -570,7 +568,7 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
         // If we are being polled or there is no kthread, just leave.
         t = READ_ONCE(rdp->nocb_gp_kthread);
         if (rcu_nocb_poll || !t) {
-                rcu_nocb_unlock_irqrestore(rdp, flags);
+                rcu_nocb_unlock(rdp);
                 trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
                                     TPS("WakeNotPoll"));
                 return;
@@ -583,17 +581,17 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
                 rdp->qlen_last_fqs_check = len;
                 // Only lazy CBs in bypass list
                 if (lazy_len && bypass_len == lazy_len) {
-                        rcu_nocb_unlock_irqrestore(rdp, flags);
+                        rcu_nocb_unlock(rdp);
                         wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_LAZY,
                                            TPS("WakeLazy"));
                 } else if (!irqs_disabled_flags(flags)) {
                         /* ... if queue was empty ... */
-                        rcu_nocb_unlock_irqrestore(rdp, flags);
+                        rcu_nocb_unlock(rdp);
                         wake_nocb_gp(rdp, false);
                         trace_rcu_nocb_wake(rcu_state.name, rdp->cpu,
                                             TPS("WakeEmpty"));
                 } else {
-                        rcu_nocb_unlock_irqrestore(rdp, flags);
+                        rcu_nocb_unlock(rdp);
                         wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE,
                                            TPS("WakeEmptyIsDeferred"));
                 }
@@ -611,15 +609,15 @@ static void __call_rcu_nocb_wake(struct rcu_data *rdp, bool was_alldone,
                 if ((rdp->nocb_cb_sleep ||
                      !rcu_segcblist_ready_cbs(&rdp->cblist)) &&
                     !timer_pending(&rdp->nocb_timer)) {
-                        rcu_nocb_unlock_irqrestore(rdp, flags);
+                        rcu_nocb_unlock(rdp);
                         wake_nocb_gp_defer(rdp, RCU_NOCB_WAKE_FORCE,
                                            TPS("WakeOvfIsDeferred"));
                 } else {
-                        rcu_nocb_unlock_irqrestore(rdp, flags);
+                        rcu_nocb_unlock(rdp);
                         trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WakeNot"));
                 }
         } else {
-                rcu_nocb_unlock_irqrestore(rdp, flags);
+                rcu_nocb_unlock(rdp);
                 trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WakeNot"));
         }
 }
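
Editor's note (not part of the patch): below is a minimal userspace sketch of
the IRQ-flags discipline the patch above establishes: call_rcu() performs the
single local_irq_save() and the single local_irq_restore(), while the nocb
helpers only deal with the nocb lock. All *_model() names and the plain
boolean standing in for the CPU IRQ state are invented stand-ins; the real
kernel primitives (local_irq_save()/local_irq_restore(), rcu_nocb_lock()/
rcu_nocb_unlock()) are of course not this simple.

/* Editor's sketch: models the symmetric save/restore in __call_rcu_common(). */
#include <stdbool.h>
#include <stdio.h>

static bool irqs_enabled = true;           /* stands in for the CPU IRQ state */

static void local_irq_save_model(void)    { irqs_enabled = false; }
static void local_irq_restore_model(void) { irqs_enabled = true;  }

/*
 * Stand-in for rcu_nocb_try_bypass(): returns true when the callback went
 * onto the bypass list.  After the patch it no longer re-enables IRQs;
 * that is left to the caller.
 */
static bool nocb_try_bypass_model(bool nocb_cpu, bool bypass_ok)
{
        return nocb_cpu && bypass_ok;
}

/* Stand-in for __call_rcu_common(): one save, one restore, on every path. */
static void call_rcu_model(bool nocb_cpu, bool bypass_ok)
{
        local_irq_save_model();

        if (nocb_try_bypass_model(nocb_cpu, bypass_ok)) {
                local_irq_restore_model();  /* symmetric restore on early return */
                return;
        }

        /* ... enqueue on the regular list, possibly wake the nocb GP kthread ... */

        local_irq_restore_model();          /* symmetric restore on the main path */
}

int main(void)
{
        bool cases[][2] = { { true, true }, { true, false }, { false, false } };

        for (unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
                call_rcu_model(cases[i][0], cases[i][1]);
                printf("case %u: irqs_enabled = %d\n", i, irqs_enabled);
        }
        return 0;                           /* prints 1 for every case */
}

Running the sketch prints irqs_enabled = 1 for every path, mirroring the
invariant the patch aims for: call_rcu() returns with the caller's IRQ state
restored regardless of which enqueue path was taken.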