Message ID | 20221122010421.3799681-5-paulmck@kernel.org |
---|---|
State | New |
Headers |
Series | [v2,rcu,01/16] rcu: Simplify rcu_init_nohz() cpumask handling |
Commit Message
Paul E. McKenney
Nov. 22, 2022, 1:04 a.m. UTC
From: "Joel Fernandes (Google)" <joel@joelfernandes.org> This consolidates the code a bit and makes it cleaner. Functionally it is the same. Reported-by: Paul E. McKenney <paulmck@kernel.org> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org> Signed-off-by: Paul E. McKenney <paulmck@kernel.org> --- kernel/rcu/tree_nocb.h | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-)
Comments
On Mon, Nov 21, 2022 at 05:04:10PM -0800, Paul E. McKenney wrote:
> From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
> 
> This consolidates the code a bit and makes it cleaner. Functionally it
> is the same.
> 
> Reported-by: Paul E. McKenney <paulmck@kernel.org>
> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>  kernel/rcu/tree_nocb.h | 17 +++++++++--------
>  1 file changed, 9 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> index d6e4c076b0515..213daf81c057f 100644
> --- a/kernel/rcu/tree_nocb.h
> +++ b/kernel/rcu/tree_nocb.h
> @@ -327,10 +327,11 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
>   *
>   * Note that this function always returns true if rhp is NULL.
>   */
> -static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
> +static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp_in,
>  				     unsigned long j, bool lazy)
>  {
>  	struct rcu_cblist rcl;
> +	struct rcu_head *rhp = rhp_in;

Why that intermediate rhp_in?

> 
>  	WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp));
>  	rcu_lockdep_assert_cblist_protected(rdp);
> @@ -345,16 +346,16 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
> 
>  	/*
>  	 * If the new CB requested was a lazy one, queue it onto the main
> -	 * ->cblist so we can take advantage of a sooner grade period.
> +	 * ->cblist so that we can take advantage of the grace-period that will
> +	 * happen regardless. But queue it onto the bypass list first so that
> +	 * the lazy CB is ordered with the existing CBs in the bypass list.
>  	 */
>  	if (lazy && rhp) {
> -		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, NULL);
> -		rcu_cblist_enqueue(&rcl, rhp);
> -		WRITE_ONCE(rdp->lazy_len, 0);
> -	} else {
> -		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
> -		WRITE_ONCE(rdp->lazy_len, 0);
> +		rcu_cblist_enqueue(&rdp->nocb_bypass, rhp);
> +		rhp = NULL;
>  	}
> +	rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
> +	WRITE_ONCE(rdp->lazy_len, 0);

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

Thanks.

> 
>  	rcu_segcblist_insert_pend_cbs(&rdp->cblist, &rcl);
>  	WRITE_ONCE(rdp->nocb_bypass_first, j);
> -- 
> 2.31.1.189.g2e36527f23
> 
On Wed, Nov 23, 2022 at 04:59:29PM +0100, Frederic Weisbecker wrote:
> On Mon, Nov 21, 2022 at 05:04:10PM -0800, Paul E. McKenney wrote:
> > From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
> > 
> > This consolidates the code a bit and makes it cleaner. Functionally it
> > is the same.
> > 
> > Reported-by: Paul E. McKenney <paulmck@kernel.org>
> > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> >  kernel/rcu/tree_nocb.h | 17 +++++++++--------
> >  1 file changed, 9 insertions(+), 8 deletions(-)
> > 
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index d6e4c076b0515..213daf81c057f 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> > @@ -327,10 +327,11 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
> >   *
> >   * Note that this function always returns true if rhp is NULL.
> >   */
> > -static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
> > +static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp_in,
> >  				     unsigned long j, bool lazy)
> >  {
> >  	struct rcu_cblist rcl;
> > +	struct rcu_head *rhp = rhp_in;
> 
> Why that intermediate rhp_in?

To avoid modifying the formal parameter, should the original value prove
useful, for example, for tracing or debugging.

> >  	WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp));
> >  	rcu_lockdep_assert_cblist_protected(rdp);
> > @@ -345,16 +346,16 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
> > 
> >  	/*
> >  	 * If the new CB requested was a lazy one, queue it onto the main
> > -	 * ->cblist so we can take advantage of a sooner grade period.
> > +	 * ->cblist so that we can take advantage of the grace-period that will
> > +	 * happen regardless. But queue it onto the bypass list first so that
> > +	 * the lazy CB is ordered with the existing CBs in the bypass list.
> >  	 */
> >  	if (lazy && rhp) {
> > -		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, NULL);
> > -		rcu_cblist_enqueue(&rcl, rhp);
> > -		WRITE_ONCE(rdp->lazy_len, 0);
> > -	} else {
> > -		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
> > -		WRITE_ONCE(rdp->lazy_len, 0);
> > +		rcu_cblist_enqueue(&rdp->nocb_bypass, rhp);
> > +		rhp = NULL;
> >  	}
> > +	rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
> > +	WRITE_ONCE(rdp->lazy_len, 0);
> 
> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

Thank you! I will apply this on my next rebase.

						Thanx, Paul

> Thanks.
> 
> >  	rcu_segcblist_insert_pend_cbs(&rdp->cblist, &rcl);
> >  	WRITE_ONCE(rdp->nocb_bypass_first, j);
> > -- 
> > 2.31.1.189.g2e36527f23
> > 
> On Nov 23, 2022, at 12:54 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> 
> On Wed, Nov 23, 2022 at 04:59:29PM +0100, Frederic Weisbecker wrote:
>>> On Mon, Nov 21, 2022 at 05:04:10PM -0800, Paul E. McKenney wrote:
>>> From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
>>> 
>>> This consolidates the code a bit and makes it cleaner. Functionally it
>>> is the same.
>>> 
>>> Reported-by: Paul E. McKenney <paulmck@kernel.org>
>>> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
>>> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
>>> ---
>>> kernel/rcu/tree_nocb.h | 17 +++++++++--------
>>> 1 file changed, 9 insertions(+), 8 deletions(-)
>>> 
>>> diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
>>> index d6e4c076b0515..213daf81c057f 100644
>>> --- a/kernel/rcu/tree_nocb.h
>>> +++ b/kernel/rcu/tree_nocb.h
>>> @@ -327,10 +327,11 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
>>> *
>>> * Note that this function always returns true if rhp is NULL.
>>> */
>>> -static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
>>> +static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp_in,
>>> 				     unsigned long j, bool lazy)
>>> {
>>> 	struct rcu_cblist rcl;
>>> +	struct rcu_head *rhp = rhp_in;
>> 
>> Why that intermediate rhp_in?
> 
> To avoid modifying the formal parameter, should the original value prove
> useful, for example, for tracing or debugging.

So as to not re assign function parameter and introduce bugs down the line
(someone reading code thinks they passed a certain rhp but code is using
something else later in the function).

Thanks.

> 
>>> 	WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp));
>>> 	rcu_lockdep_assert_cblist_protected(rdp);
>>> @@ -345,16 +346,16 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
>>> 
>>> 	/*
>>> 	 * If the new CB requested was a lazy one, queue it onto the main
>>> -	 * ->cblist so we can take advantage of a sooner grade period.
>>> +	 * ->cblist so that we can take advantage of the grace-period that will
>>> +	 * happen regardless. But queue it onto the bypass list first so that
>>> +	 * the lazy CB is ordered with the existing CBs in the bypass list.
>>> 	 */
>>> 	if (lazy && rhp) {
>>> -		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, NULL);
>>> -		rcu_cblist_enqueue(&rcl, rhp);
>>> -		WRITE_ONCE(rdp->lazy_len, 0);
>>> -	} else {
>>> -		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
>>> -		WRITE_ONCE(rdp->lazy_len, 0);
>>> +		rcu_cblist_enqueue(&rdp->nocb_bypass, rhp);
>>> +		rhp = NULL;
>>> 	}
>>> +	rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
>>> +	WRITE_ONCE(rdp->lazy_len, 0);
>> 
>> Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
> 
> Thank you! I will apply this on my next rebase.
> 
> 						Thanx, Paul
> 
>> Thanks.
>> 
>>> 
>>> 	rcu_segcblist_insert_pend_cbs(&rdp->cblist, &rcl);
>>> 	WRITE_ONCE(rdp->nocb_bypass_first, j);
>>> -- 
>>> 2.31.1.189.g2e36527f23
>>> 
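For readers not following the rhp_in/rhp exchange above, the snippet below is a minimal sketch of the pattern in question, using made-up names rather than anything from the kernel: the formal parameter is copied into a local cursor at the top of the function, only the local is ever reassigned, and the untouched parameter remains available at the end, for example for a trace or debug print.

#include <stdio.h>

struct node {
	struct node *next;
	int val;
};

/* Walk the list via a local copy of the parameter so that head_in always
 * still names the caller's original argument. */
static int list_sum(struct node *head_in)
{
	struct node *head = head_in;	/* local cursor; head_in is never reassigned */
	int sum = 0;

	while (head) {
		sum += head->val;
		head = head->next;
	}
	/* The original argument is still at hand for tracing/debugging. */
	fprintf(stderr, "list_sum(%p) = %d\n", (void *)head_in, sum);
	return sum;
}

int main(void)
{
	struct node c = { NULL, 3 };
	struct node b = { &c, 2 };
	struct node a = { &b, 1 };

	printf("%d\n", list_sum(&a));	/* prints 6 */
	return 0;
}

The same reasoning carries over to rcu_nocb_do_flush_bypass() in the patch below: the function wants to NULL out the pointer in the lazy case without losing track of what the caller actually passed in.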
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index d6e4c076b0515..213daf81c057f 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -327,10 +327,11 @@ static void wake_nocb_gp_defer(struct rcu_data *rdp, int waketype,
  *
  * Note that this function always returns true if rhp is NULL.
  */
-static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
+static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp_in,
 				     unsigned long j, bool lazy)
 {
 	struct rcu_cblist rcl;
+	struct rcu_head *rhp = rhp_in;
 
 	WARN_ON_ONCE(!rcu_rdp_is_offloaded(rdp));
 	rcu_lockdep_assert_cblist_protected(rdp);
@@ -345,16 +346,16 @@ static bool rcu_nocb_do_flush_bypass(struct rcu_data *rdp, struct rcu_head *rhp,
 
 	/*
 	 * If the new CB requested was a lazy one, queue it onto the main
-	 * ->cblist so we can take advantage of a sooner grade period.
+	 * ->cblist so that we can take advantage of the grace-period that will
+	 * happen regardless. But queue it onto the bypass list first so that
+	 * the lazy CB is ordered with the existing CBs in the bypass list.
 	 */
 	if (lazy && rhp) {
-		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, NULL);
-		rcu_cblist_enqueue(&rcl, rhp);
-		WRITE_ONCE(rdp->lazy_len, 0);
-	} else {
-		rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
-		WRITE_ONCE(rdp->lazy_len, 0);
+		rcu_cblist_enqueue(&rdp->nocb_bypass, rhp);
+		rhp = NULL;
 	}
+	rcu_cblist_flush_enqueue(&rcl, &rdp->nocb_bypass, rhp);
+	WRITE_ONCE(rdp->lazy_len, 0);
 
 	rcu_segcblist_insert_pend_cbs(&rdp->cblist, &rcl);
 	WRITE_ONCE(rdp->nocb_bypass_first, j);