Message ID | 20230112075629.1661429-1-qiang1.zhang@intel.com
---|---
State | New
Series | [v2] rcu: Fix the start_poll_synchronize_rcu_expedited() be invoked very early
Commit Message
Zqiang
Jan. 12, 2023, 7:56 a.m. UTC
Currently, start_poll_synchronize_rcu_expedited() can be invoked very
early, that is, before rcu_init() has run. But before rcu_init(), the
rcu_data structure's ->mynode field has not yet been initialized, so
invoking start_poll_synchronize_rcu_expedited() that early dereferences
a NULL rcu_node pointer when it accesses ->exp_seq_poll_rq.

This commit adds a boot_exp_seq_poll_rq member to the rcu_state
structure to store the sequence number returned by such an early
invocation of start_poll_synchronize_rcu_expedited().
Fixes: d96c52fe4907 ("rcu: Add polled expedited grace-period primitives")
Signed-off-by: Zqiang <qiang1.zhang@intel.com>
---
kernel/rcu/tree.c | 3 ++-
kernel/rcu/tree.h | 1 +
kernel/rcu/tree_exp.h | 6 ++++--
3 files changed, 7 insertions(+), 3 deletions(-)
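
Before the review thread below, it may help to see the shape of the v2 fix in isolation. The following is a self-contained userspace sketch of the same park-and-replay idea (all names here are invented stand-ins, not the kernel symbols):

	#include <stdbool.h>
	#include <stdio.h>

	#define GET_STATE_COMPLETED (~0UL)	/* stand-in for RCU_GET_STATE_COMPLETED */

	static bool init_done;			/* stand-in for rcu_init_invoked() */
	static unsigned long boot_poll_rq = GET_STATE_COMPLETED;

	/* Record a polled-GP request; before init, park it in a global. */
	static void record_poll_request(unsigned long seq)
	{
		if (init_done)
			printf("set rnp->exp_seq_poll_rq = %lu and queue work\n", seq);
		else
			boot_poll_rq = seq;	/* safe: no per-node pointers touched */
	}

	/* Replay any request that arrived before init, as rcu_init() does. */
	static void toy_init(void)
	{
		init_done = true;
		if (boot_poll_rq != GET_STATE_COMPLETED)
			record_poll_request(boot_poll_rq);
	}

	int main(void)
	{
		record_poll_request(8);	/* "invoked very early" */
		toy_init();		/* kick-starts the parked request */
		return 0;
	}

The key property is that the early path writes only a statically initialized global, never a per-CPU/per-node pointer that rcu_init() has yet to set up.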
Comments
On Thu, Jan 12, 2023 at 03:56:29PM +0800, Zqiang wrote:
> Currently, the start_poll_synchronize_rcu_expedited() can be invoked
> very early. [...]
>
> Fixes: d96c52fe4907 ("rcu: Add polled expedited grace-period primitives")
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>

Reviewed-by: Frederic Weisbecker <frederic@kernel.org>

Just a nit below:

> @@ -4938,7 +4939,7 @@ void __init rcu_init(void)
>  		qovld_calc = qovld;
>
>  	// Kick-start any polled grace periods that started early.
> -	if (!(per_cpu_ptr(&rcu_data, cpu)->mynode->exp_seq_poll_rq & 0x1))
> +	if (!(rcu_state.boot_exp_seq_poll_rq & 0x1))

This can be "if (!(rcu_state.boot_exp_seq_poll_rq == RCU_GET_STATE_COMPLETED))"

> @@ -397,6 +397,7 @@ struct rcu_state {
>  	int nocb_is_setup;			/* nocb is setup from boot */
> +	unsigned long boot_exp_seq_poll_rq;

A comment on the right can mention: "/* exp seq poll request before rcu_init() */"

Thanks!
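
For concreteness, here is roughly what Frederic's two suggestions would look like applied together (an editorial sketch against the v2 patch, not code posted in the thread):

	/* In struct rcu_state (kernel/rcu/tree.h): */
	unsigned long boot_exp_seq_poll_rq;	/* exp seq poll request before rcu_init() */

	/* In rcu_init() (kernel/rcu/tree.c): compare against the sentinel
	 * the field was initialized to, instead of testing the low bit. */
	if (rcu_state.boot_exp_seq_poll_rq != RCU_GET_STATE_COMPLETED)
		(void)start_poll_synchronize_rcu_expedited();

Note that "x != y" is simply the reduced form of Frederic's "!(x == y)".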
On Jan 12, 2023, at 2:51 AM, Zqiang <qiang1.zhang@intel.com> wrote:
> Currently, the start_poll_synchronize_rcu_expedited() can be invoked
> very early. [...]
>
> Fixes: d96c52fe4907 ("rcu: Add polled expedited grace-period primitives")
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>

Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

Thanks.
On Thu, Jan 12, 2023 at 03:56:29PM +0800, Zqiang wrote:
> Currently, the start_poll_synchronize_rcu_expedited() can be invoked
> very early. [...]
>
> Fixes: d96c52fe4907 ("rcu: Add polled expedited grace-period primitives")
> Signed-off-by: Zqiang <qiang1.zhang@intel.com>

First off, excellent catch, Zqiang!!!

And thank you to Frederic and Joel for your reviews.

But I believe that this can be simplified, for example, as shown in
the (untested) patch below.

Thoughts?

And yes, I did presumptuously add Frederic's and Joel's reviews.
Please let me know if you disagree, and if so what different approach
you would prefer. (Though of course simple disagreement is sufficient
for me to remove your tag. Not holding you hostage for improvements,
not yet, anyway!)

							Thanx, Paul

------------------------------------------------------------------------

commit e05af5cb3858e669c9e6b70e0aca708cc70457da
Author: Zqiang <qiang1.zhang@intel.com>
Date:   Thu Jan 12 10:48:29 2023 -0800

    rcu: Permit start_poll_synchronize_rcu_expedited() to be invoked early

    According to the commit log of the patch that added it to the kernel,
    start_poll_synchronize_rcu_expedited() can be invoked very early, as
    in long before rcu_init() has been invoked.  But before rcu_init(),
    the rcu_data structure's ->mynode field has not yet been initialized.
    This means that the start_poll_synchronize_rcu_expedited() function's
    attempt to set the CPU's leaf rcu_node structure's ->exp_seq_poll_rq
    field will result in a segmentation fault.

    This commit therefore causes start_poll_synchronize_rcu_expedited() to
    set ->exp_seq_poll_rq only after rcu_init() has initialized all CPUs'
    rcu_data structures' ->mynode fields.  It also removes the check from
    the rcu_init() function so that start_poll_synchronize_rcu_expedited()
    is unconditionally invoked.  Yes, this might result in an unnecessary
    boot-time grace period, but this is down in the noise.  Besides, there
    only has to be one call_rcu() invoked prior to scheduler initialization
    to make this boot-time grace period necessary.

    Signed-off-by: Zqiang <qiang1.zhang@intel.com>
    Reviewed-by: Frederic Weisbecker <frederic@kernel.org>
    Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
    Signed-off-by: Paul E. McKenney <paulmck@kernel.org>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 63545d79da51c..f2e3a23778c06 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4937,9 +4937,8 @@ void __init rcu_init(void)
 	else
 		qovld_calc = qovld;

-	// Kick-start any polled grace periods that started early.
-	if (!(per_cpu_ptr(&rcu_data, cpu)->mynode->exp_seq_poll_rq & 0x1))
-		(void)start_poll_synchronize_rcu_expedited();
+	// Kick-start in case any polled grace periods started early.
+	(void)start_poll_synchronize_rcu_expedited();

 	rcu_test_sync_prims();
 }
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 956cd459ba7f3..3b7abb58157df 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -1068,9 +1068,10 @@ unsigned long start_poll_synchronize_rcu_expedited(void)
 	if (rcu_init_invoked())
 		raw_spin_lock_irqsave(&rnp->exp_poll_lock, flags);
 	if (!poll_state_synchronize_rcu(s)) {
-		rnp->exp_seq_poll_rq = s;
-		if (rcu_init_invoked())
+		if (rcu_init_invoked()) {
+			rnp->exp_seq_poll_rq = s;
 			queue_work(rcu_gp_wq, &rnp->exp_poll_wq);
+		}
 	}
 	if (rcu_init_invoked())
 		raw_spin_unlock_irqrestore(&rnp->exp_poll_lock, flags);
> On Thu, Jan 12, 2023 at 03:56:29PM +0800, Zqiang wrote:
> > Currently, the start_poll_synchronize_rcu_expedited() can be invoked
> > very early. [...]
>
> First off, excellent catch, Zqiang!!!
>
> And thank you to Frederic and Joel for your reviews.
>
> But I believe that this can be simplified, for example, as shown in
> the (untested) patch below.
>
> Thoughts?

Agree, thanks for the wordsmithing.

> [...]
> rcu: Permit start_poll_synchronize_rcu_expedited() to be invoked early
> [...]
> It also removes the check from
> the rcu_init() function so that start_poll_synchronize_rcu_expedited()
> is unconditionally invoked. Yes, this might result in an unnecessary
> boot-time grace period, but this is down in the noise. Besides, there
> only has to be one call_rcu() invoked prior to scheduler initialization
> to make this boot-time grace period necessary.

A little confused: why does a call_rcu() invoked prior to scheduler
initialization make this boot-time grace period necessary?

Thanks
Zqiang
> rcu: Permit start_poll_synchronize_rcu_expedited() to be invoked early
> [...]
> Yes, this might result in an unnecessary
> boot-time grace period, but this is down in the noise. Besides, there
> only has to be one call_rcu() invoked prior to scheduler initialization
> to make this boot-time grace period necessary.

A little confused: why does a call_rcu() invoked prior to scheduler
initialization make this boot-time grace period necessary? What
call_rcu() affects is the normal grace period, but what we are polling
here is an expedited grace period.

Thanks
Zqiang
On Fri, Jan 13, 2023 at 12:10:47PM +0000, Zhang, Qiang1 wrote:
> > [...] Besides, there
> > only has to be one call_rcu() invoked prior to scheduler initialization
> > to make this boot-time grace period necessary.
>
> A little confused: why does a call_rcu() invoked prior to scheduler
> initialization make this boot-time grace period necessary?

Because then there will be a callback queued that will require a grace
period to run anyway.

Or maybe you are asking if those callbacks will really be able to use
that first grace period?

							Thanx, Paul
> On Fri, Jan 13, 2023 at 12:10:47PM +0000, Zhang, Qiang1 wrote:
> > A little confused: why does a call_rcu() invoked prior to scheduler
> > initialization make this boot-time grace period necessary?
>
> Because then there will be a callback queued that will require a grace
> period to run anyway.
>
> Or maybe you are asking if those callbacks will really be able to use
> that first grace period?

Yes. Even if we queue work on the rcu_gp_wq workqueue, that work cannot
execute until workqueue_init() has been invoked, and we also need to
wait for the rcu_gp kthread to be created. Only after that can our
first grace period begin and the callbacks have the opportunity to be
invoked. Also, call_rcu() requires a normal grace period, but what we
require here is an expedited grace period.

Thanks
Zqiang
On Fri, Jan 13, 2023 at 03:02:54PM +0000, Zhang, Qiang1 wrote:
> Yes. Even if we queue work on the rcu_gp_wq workqueue, that work cannot
> execute until workqueue_init() has been invoked, and we also need to
> wait for the rcu_gp kthread to be created. Only after that can our
> first grace period begin and the callbacks have the opportunity to be
> invoked. Also, call_rcu() requires a normal grace period, but what we
> require here is an expedited grace period.

Good catch, thank you! I will update the commit log accordingly.

							Thanx, Paul
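
For readers keeping track of the ordering argument above, here is the relevant boot sequence as a call-tree sketch. The three RCU/workqueue functions are the ones named in the thread; their exact call sites are assumptions that should be checked against init/main.c:

	/*
	 * start_kernel()
	 *   rcu_init()               // ->mynode wired up; early poll request
	 *                            //   can be kick-started from here
	 *   ...
	 * kernel_init_freeable()
	 *   workqueue_init()         // work queued on rcu_gp_wq can now run
	 *   do_pre_smp_initcalls()
	 *     rcu_spawn_gp_kthread() // grace periods can actually begin
	 */

In other words, a request recorded before rcu_init() could not have made progress anyway, so the unconditional kick-start in rcu_init() is early enough.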
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 63545d79da51..12f0891ce7f4 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -92,6 +92,7 @@ static struct rcu_state rcu_state = {
 	.exp_mutex = __MUTEX_INITIALIZER(rcu_state.exp_mutex),
 	.exp_wake_mutex = __MUTEX_INITIALIZER(rcu_state.exp_wake_mutex),
 	.ofl_lock = __ARCH_SPIN_LOCK_UNLOCKED,
+	.boot_exp_seq_poll_rq = RCU_GET_STATE_COMPLETED,
 };

 /* Dump rcu_node combining tree at boot to verify correct setup. */
@@ -4938,7 +4939,7 @@ void __init rcu_init(void)
 		qovld_calc = qovld;

 	// Kick-start any polled grace periods that started early.
-	if (!(per_cpu_ptr(&rcu_data, cpu)->mynode->exp_seq_poll_rq & 0x1))
+	if (!(rcu_state.boot_exp_seq_poll_rq & 0x1))
 		(void)start_poll_synchronize_rcu_expedited();

 	rcu_test_sync_prims();
diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 192536916f9a..ae50ca6853ad 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -397,6 +397,7 @@ struct rcu_state {
 						/* Synchronize offline with */
 						/* GP pre-initialization. */
 	int nocb_is_setup;			/* nocb is setup from boot */
+	unsigned long boot_exp_seq_poll_rq;
 };

 /* Values for rcu_state structure's gp_flags field. */
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 956cd459ba7f..1b35a1e233d9 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -1068,9 +1068,11 @@ unsigned long start_poll_synchronize_rcu_expedited(void)
 	if (rcu_init_invoked())
 		raw_spin_lock_irqsave(&rnp->exp_poll_lock, flags);
 	if (!poll_state_synchronize_rcu(s)) {
-		rnp->exp_seq_poll_rq = s;
-		if (rcu_init_invoked())
+		if (rcu_init_invoked()) {
+			rnp->exp_seq_poll_rq = s;
 			queue_work(rcu_gp_wq, &rnp->exp_poll_wq);
+		} else
+			rcu_state.boot_exp_seq_poll_rq = s;
 	}
 	if (rcu_init_invoked())
 		raw_spin_unlock_irqrestore(&rnp->exp_poll_lock, flags);
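
Finally, for context on how the API being fixed here is consumed, a hypothetical kernel-side caller might look like this. This is a sketch only: early_boot_hook() and later_check() are invented names, while the three RCU functions are the real ones touched or referenced by this patch:

	#include <linux/rcupdate.h>

	static unsigned long boot_gp_cookie;

	/* Safe even before rcu_init() once this fix is applied. */
	static void early_boot_hook(void)
	{
		boot_gp_cookie = start_poll_synchronize_rcu_expedited();
	}

	/* Later, once scheduling is up: has that grace period elapsed? */
	static void later_check(void)
	{
		if (poll_state_synchronize_rcu(boot_gp_cookie))
			pr_info("early expedited grace period completed\n");
		else
			cond_synchronize_rcu_expedited(boot_gp_cookie); /* wait it out */
	}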