Message ID | 20231212174817.11919-3-neeraj.iitr10@gmail.com |
---|---|
State | New |
Headers |
From: "Neeraj Upadhyay (AMD)" <neeraj.iitr10@gmail.com>
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, paulmck@kernel.org, Neeraj.Upadhyay@amd.com, Frederic Weisbecker <frederic@kernel.org>, Joel Fernandes <joel@joelfernandes.org>, Neeraj Upadhyay <neeraj.iitr10@gmail.com>
Subject: [PATCH rcu 3/3] srcu: Explain why callbacks invocations can't run concurrently
Date: Tue, 12 Dec 2023 23:18:17 +0530
Message-Id: <20231212174817.11919-3-neeraj.iitr10@gmail.com>
In-Reply-To: <20231212174750.GA11886@neeraj.linux>
References: <20231212174750.GA11886@neeraj.linux> |
Series | SRCU updates for v6.8 |
Commit Message
Neeraj Upadhyay (AMD)
Dec. 12, 2023, 5:48 p.m. UTC
From: Frederic Weisbecker <frederic@kernel.org>

If an SRCU barrier is queued while callbacks are running and a new
callbacks invocation worker for the same sdp were to run concurrently,
the SRCU barrier might execute too early. As this requirement is
non-obvious, make sure to keep a record of it in a comment.

Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Neeraj Upadhyay (AMD) <neeraj.iitr10@gmail.com>
---
 kernel/rcu/srcutree.c | 6 ++++++
 1 file changed, 6 insertions(+)
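For context: srcu_barrier() works by enqueuing a sentinel callback at the tail of each srcu_data (sdp) callback list and waiting for all of them, so it may return only after every callback enqueued before it has run. The following is a minimal userspace sketch of the single-invoker rule this patch documents; the names (sdp_model, grab_ready_callbacks) are illustrative, not the kernel's, and the real code uses workqueues and segmented callback lists rather than a mutex and a plain list.

```c
/*
 * Minimal userspace model of the single-invoker rule (illustrative
 * names, not the kernel API).  One worker at a time may drain a given
 * sdp's callback list; a second worker arriving while "invoking" is
 * set backs off instead of running callbacks concurrently.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct cb {
	struct cb *next;
	void (*func)(struct cb *);
};

struct sdp_model {
	pthread_mutex_t lock;
	struct cb *head;	/* FIFO list of ready callbacks */
	bool invoking;		/* models sdp->srcu_cblist_invoking */
};

/*
 * Detach the ready callbacks, or return NULL if another worker is
 * already invoking for this sdp.  If two workers could both detach
 * and run pieces of the list, a barrier callback near the tail might
 * finish while an earlier callback was still running, and the barrier
 * would report completion too early.
 */
static struct cb *grab_ready_callbacks(struct sdp_model *sdp)
{
	struct cb *list;

	pthread_mutex_lock(&sdp->lock);
	if (sdp->invoking || sdp->head == NULL) {
		pthread_mutex_unlock(&sdp->lock);
		return NULL;	/* someone else on the job, or no work */
	}
	sdp->invoking = true;
	list = sdp->head;
	sdp->head = NULL;
	pthread_mutex_unlock(&sdp->lock);
	return list;		/* caller runs these in FIFO order */
}
```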
Comments
On Tue, Dec 12, 2023 at 12:48 PM Neeraj Upadhyay (AMD)
<neeraj.iitr10@gmail.com> wrote:
>
> From: Frederic Weisbecker <frederic@kernel.org>
>
> If an SRCU barrier is queued while callbacks are running and a new
> callbacks invocator for the same sdp were to run concurrently, the
> RCU barrier might execute too early. As this requirement is non-obvious,
> make sure to keep a record.

[ . . . ]

> +	/*
> +	 * Although this function is theoretically re-entrant, concurrent
> +	 * callbacks invocation is disallowed to avoid executing an SRCU barrier
> +	 * too early.
> +	 */

Side comment:

I guess even without the barrier reasoning, it is best not to allow
concurrent CB execution anyway since it diverges from the behavior of
straight RCU :)

- Joel
On Wed, Dec 13, 2023 at 09:27:09AM -0500, Joel Fernandes wrote:

[ . . . ]

> Side comment:
> I guess even without the barrier reasoning, it is best not to allow
> concurrent CB execution anyway since it diverges from the behavior of
> straight RCU :)

Good point!

But please do not forget item 12 on the list in checklist.rst. ;-)
(Which I just updated to include the other call_rcu*() functions.)

							Thanx, Paul
On Wed, Dec 13, 2023 at 12:52 PM Paul E. McKenney <paulmck@kernel.org> wrote:

[ . . . ]

> Good point!
>
> But please do not forget item 12 on the list in checklist.rst. ;-)
> (Which I just updated to include the other call_rcu*() functions.)

I think this is more so now with recent kernels (with the dynamic nocb
switch) than with older kernels, right? I haven't kept up with the
checklist recently (which is my bad).

My understanding comes from the fact that the RCU barrier depends on
callbacks on the same CPU executing in order with straight RCU,
otherwise it breaks. Hence my comment. But as you pointed out, that's
outdated knowledge. I should just shut up and hide in shame now. :-/

- Joel
On Wed, Dec 13, 2023 at 01:35:22PM -0500, Joel Fernandes wrote:

[ . . . ]

> I think this is more so now with recent kernels (with the dynamic nocb
> switch) than with older kernels, right? I haven't kept up with the
> checklist recently (which is my bad).

You are quite correct! But even before this, I was saying that
lack of same-CPU callback concurrency was an accident of the current
implementation rather than a guarantee. For example, there might come
a time when RCU needs to respond to callback flooding with concurrent
execution of the flooded CPU's callbacks. Or not, but we do need to
keep this option open.

> My understanding comes from the fact that the RCU barrier depends on
> callbacks on the same CPU executing in order with straight RCU,
> otherwise it breaks. Hence my comment. But as you pointed out, that's
> outdated knowledge.

That is still one motivation for ordered execution of callbacks. For the
dynamic nocb switch, we could have chosen to make rcu_barrier() place
a callback on both lists, but we instead chose to exclude rcu_barrier()
calls during the switch.

> I should just shut up and hide in shame now.

No need for that! After all, one motivation for Requirements.rst was
to help me keep track of all this stuff.

							Thanx, Paul
On Wed, Dec 13, 2023 at 1:55 PM Paul E. McKenney <paulmck@kernel.org> wrote:

[ . . . ]

> You are quite correct! But even before this, I was saying that
> lack of same-CPU callback concurrency was an accident of the current
> implementation rather than a guarantee. For example, there might come
> a time when RCU needs to respond to callback flooding with concurrent
> execution of the flooded CPU's callbacks. Or not, but we do need to
> keep this option open.

Got it, reminds me to focus on requirements as well along with
implementation.

> That is still one motivation for ordered execution of callbacks. For the
> dynamic nocb switch, we could have chosen to make rcu_barrier() place
> a callback on both lists, but we instead chose to exclude rcu_barrier()
> calls during the switch.

Right!

> No need for that! After all, one motivation for Requirements.rst was
> to help me keep track of all this stuff.

Thanks!

- Joel
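The ordering dependency Paul describes above can be made concrete with a small model. The sketch below shows why a barrier built by enqueuing a sentinel callback at the tail of each list is correct only when each list is drained FIFO by a single invoker; the names are illustrative, and the kernel sleeps on a completion rather than spinning.

```c
/*
 * Userspace model of the barrier's reliance on in-order callback
 * execution (illustrative; not the kernel implementation).
 */
#include <stdatomic.h>

struct cb {
	struct cb *next;
	void (*func)(struct cb *);
};

static atomic_int barrier_count;	/* lists yet to reach the sentinel */

/* Sentinel callback enqueued at the tail of each per-CPU/per-sdp list. */
static void barrier_cb(struct cb *cbp)
{
	(void)cbp;
	atomic_fetch_sub(&barrier_count, 1);
}

/*
 * Correctness argument: if each list runs FIFO under one invoker, then
 * once every barrier_cb has run, every callback enqueued before the
 * barrier has completed.  With two invokers splitting one list,
 * barrier_cb could run while an earlier callback on that list was
 * still executing, and this wait would end too early.
 */
static void barrier_wait(void)
{
	while (atomic_load(&barrier_count) > 0)
		;	/* the kernel sleeps on a completion instead */
}
```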
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 2bfc8ed1eed2..0351a4e83529 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1715,6 +1715,11 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	WARN_ON_ONCE(!rcu_segcblist_segempty(&sdp->srcu_cblist, RCU_NEXT_TAIL));
 	rcu_segcblist_advance(&sdp->srcu_cblist,
 			      rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq));
+	/*
+	 * Although this function is theoretically re-entrant, concurrent
+	 * callbacks invocation is disallowed to avoid executing an SRCU barrier
+	 * too early.
+	 */
 	if (sdp->srcu_cblist_invoking ||
 	    !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
 		spin_unlock_irq_rcu_node(sdp);
@@ -1745,6 +1750,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	sdp->srcu_cblist_invoking = false;
 	more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
 	spin_unlock_irq_rcu_node(sdp);
+	/* An SRCU barrier or callbacks from previous nesting work pending */
 	if (more)
 		srcu_schedule_cbs_sdp(sdp, 0);
 }
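The second hunk's comment covers the hand-off at the end of an invocation pass: work that arrived while the invoking flag was set, including a possible barrier callback, is picked up by queueing one more exclusive pass rather than by admitting a concurrent worker. Continuing the hypothetical sdp_model sketch from earlier, that tail step looks roughly like this; schedule_worker() is an assumed stand-in for the kernel's srcu_schedule_cbs_sdp().

```c
/*
 * End of an invocation pass, continuing the sdp_model sketch above.
 * schedule_worker() is a hypothetical stand-in for the kernel's
 * srcu_schedule_cbs_sdp(); assume it is defined elsewhere.
 */
void schedule_worker(struct sdp_model *sdp);

static void finish_invocation_pass(struct sdp_model *sdp)
{
	bool more;

	pthread_mutex_lock(&sdp->lock);
	sdp->invoking = false;
	/*
	 * Work queued while we were invoking: a barrier callback, or
	 * callbacks left over from a previous nesting of the worker.
	 */
	more = (sdp->head != NULL);
	pthread_mutex_unlock(&sdp->lock);

	if (more)
		schedule_worker(sdp);	/* run another exclusive pass */
}
```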