Message ID | Y3J6P3jCNmrj3tue@rowland.harvard.edu |
---|---|
State | New |
Series | tools: memory-model: Add rmw-sequences to the LKMM |
Commit Message
Alan Stern
Nov. 14, 2022, 5:26 p.m. UTC
Jonas has pointed out a weakness in the Linux Kernel Memory Model.
Namely, the memory ordering properties of atomic operations are not
monotonic: An atomic op with full-barrier semantics does not always
provide ordering as strong as one with release-barrier semantics.
The following litmus test illustrates the problem:
--------------------------------------------------
C atomics-not-monotonic

{}

P0(int *x, atomic_t *y)
{
	WRITE_ONCE(*x, 1);
	smp_wmb();
	atomic_set(y, 1);
}

P1(atomic_t *y)
{
	int r1;

	r1 = atomic_inc_return(y);
}

P2(int *x, atomic_t *y)
{
	int r2;
	int r3;

	r2 = atomic_read(y);
	smp_rmb();
	r3 = READ_ONCE(*x);
}

exists (2:r2=2 /\ 2:r3=0)
--------------------------------------------------
The litmus test is allowed as shown with atomic_inc_return(), which
has full-barrier semantics. But if the operation is changed to
atomic_inc_return_release(), which only has release-barrier semantics,
the litmus test is forbidden. Clearly this violates monotonicity.
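As a sanity check on the litmus test, the flagged outcome can be shown impossible under plain sequential consistency with a small brute-force enumerator (an editorial sketch in Python, not part of the patch; under SC the fences are no-ops and atomic_inc_return is modeled as one indivisible step, so any run exhibiting the outcome would have to rely on weaker, architecture-level reordering):

```python
# Editorial sketch: exhaustively check the exists-clause under sequential
# consistency (SC). SC is strictly stronger than the LKMM, so this only
# shows the outcome requires weak-memory effects to appear at all.

def interleavings(progs):
    """Yield every merge of the programs that preserves each one's order."""
    if all(len(p) == 0 for p in progs):
        yield []
        return
    for i, p in enumerate(progs):
        if p:
            rest = list(progs)
            rest[i] = p[1:]
            for tail in interleavings(rest):
                yield [p[0]] + tail

# Fences are no-ops under SC, so they are omitted.
P0 = [("store", "x", 1), ("store", "y", 1)]      # WRITE_ONCE; atomic_set
P1 = [("inc", "y", "r1")]                        # atomic_inc_return
P2 = [("load", "y", "r2"), ("load", "x", "r3")]  # atomic_read; READ_ONCE

def run(seq):
    mem = {"x": 0, "y": 0}
    regs = {}
    for op, var, arg in seq:
        if op == "store":
            mem[var] = arg
        elif op == "inc":          # one indivisible read-modify-write
            mem[var] += 1
            regs[arg] = mem[var]
        elif op == "load":
            regs[arg] = mem[var]
    return regs

outcomes = {(run(s)["r2"], run(s)["r3"]) for s in interleavings([P0, P1, P2])}
print((2, 0) in outcomes)   # the exists-clause outcome never shows up under SC
```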
The reason is that the LKMM treats full-barrier atomic ops as if
they were written:

	mb();
	load();
	store();
	mb();

(where the load() and store() are the two parts of an atomic RMW op),
whereas it treats release-barrier atomic ops as if they were written:

	load();
	release_barrier();
	store();
The difference is that here the release barrier orders the load part
of the atomic op before the store part with A-cumulativity, whereas
the mb()'s above do not. This means that release-barrier atomics can
effectively extend the cumul-fence relation but full-barrier atomics
cannot.
To resolve this problem we introduce the rmw-sequence relation,
representing an arbitrarily long sequence of atomic RMW operations in
which each operation reads from the previous one, and explicitly allow
it to extend cumul-fence. This modification of the memory model is
sound; it holds for PPC because of B-cumulativity, it holds for TSO
and ARM64 because of other-multicopy atomicity, and we can assume that
atomic ops on all other architectures will be implemented so as to
make it hold for them.
For similar reasons we also allow rmw-sequence to extend the
w-post-bounded relation, which is analogous to cumul-fence in some
ways.
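For readers unfamiliar with cat notation, the ';' and '*' operators in the new rmw-sequence definition are ordinary relational composition and reflexive-transitive closure. A minimal editorial Python sketch (not part of the patch; event names W, Z0, Y1, Z1 are invented labels for the litmus test's accesses, with Y1/Z1 the load and store halves of P1's atomic_inc_return):

```python
# Editorial sketch: cat's ';' (composition) and '*' (reflexive-transitive
# closure) as plain set operations on relations over events.

def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def rt_closure(r, events):
    """Reflexive-transitive closure r* over the given event set."""
    result = {(e, e) for e in events}
    frontier = set(result)
    while frontier:
        frontier = compose(frontier, r) - result
        result |= frontier
    return result

# W  = WRITE_ONCE(*x, 1) in P0       Z0 = atomic_set(y, 1) in P0
# Y1 = load half of P1's inc_return  Z1 = store half of P1's inc_return
events = {"Z0", "Y1", "Z1"}
rf = {("Z0", "Y1")}      # Y1 reads from Z0
rmw = {("Y1", "Z1")}     # load/store halves of one atomic update

rmw_sequence = rt_closure(compose(rf, rmw), events)   # (rf ; rmw)*
assert ("Z0", "Z1") in rmw_sequence

# The patch appends rmw-sequence to cumul-fence: the smp_wmb() edge
# W ->cumul-fence Z0 now extends to W ->cumul-fence Z1, the store
# half of P1's atomic op, which is what forbids the litmus outcome.
cumul_fence = {("W", "Z0")}
extended = compose(cumul_fence, rmw_sequence)
print(("W", "Z1") in extended)   # True
```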
Suggested-by: Jonas Oberhauser <jonas.oberhauser@huawei.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
---
tools/memory-model/Documentation/explanation.txt | 28 +++++++++++++++++++++++
tools/memory-model/linux-kernel.cat | 5 ++--
2 files changed, 31 insertions(+), 2 deletions(-)
Comments
On Mon, Nov 14, 2022 at 12:26:23PM -0500, Alan Stern wrote:

[ full patch description quoted; trimmed ]

> Suggested-by: Jonas Oberhauser <jonas.oberhauser@huawei.com>
> Signed-off-by: Alan Stern <stern@rowland.harvard.edu>

Reviewed-by: Boqun Feng <boqun.feng@gmail.com>

Thanks!

Regards,
Boqun

[ quoted diff trimmed ]
On Tue, Nov 15, 2022 at 02:05:39PM +0000, Jonas Oberhauser wrote:
> > -----Original Message-----
> > From: Alan Stern [mailto:stern@rowland.harvard.edu]
> > Sent: Monday, November 14, 2022 6:26 PM
>
> Hi Alan,
> thanks for preparing this!
>
> > Jonas has pointed out a weakness in the Linux Kernel Memory Model.
> > Namely, the memory ordering properties of atomic operations are not
> > monotonic: An atomic op with full-barrier semantics does not always
> > provide ordering as strong as one with release-barrier semantics.
>
> Note that I believe it was Viktor who originally pointed out this
> weakness to me in private communication. My contribution (besides
> chatting with you) is to check that the solution does indeed restore
> the monotonicity (not just on some litmus tests but in general).
>
> So I would change the wording to "Viktor has pointed out a weakness
> in the Linux Kernel Memory Model."

People will wonder who Viktor is.  I don't have his full name or email
address.  In fact, shouldn't he have been CC'ed during this entire
discussion?

> > +let rmw-sequence = (rf ; rmw)*
>
> I would perhaps suggest to only consider external read-from in
> rmw-sequences, as below:
> +let rmw-sequence = (rfe ; rmw)*

We discussed the matter earlier, and I don't recall any mention of this
objection.

> The reason I (slightly) prefer this is that this is sufficient to
> imply monotonicity.
> Also there is some minor concern that the patch that results in the
> stricter model (i.e., rmw-sequence = (rf ; rmw)*) might be incorrect
> on some hypothetical future architecture in which RMWs can be merged
> in the store coalescing queue with earlier stores to the same
> location.  This is exemplified in the following litmus test:
>
> C atomics-not-monotonic-2
>
> {}
>
> P0(int *x, atomic_t *y)
> {
> 	int r1;
>
> 	WRITE_ONCE(*x, 1);
> 	smp_store_release(y, 0);
> 	r1 = atomic_inc_return_relaxed(y);
> }
>
> P1(atomic_t *y)
> {
> 	int r1;
>
> 	r1 = atomic_inc_return(y);
> }
>
> P2(int *x, atomic_t *y)
> {
> 	int r2;
> 	int r3;
>
> 	r2 = atomic_read(y);
> 	smp_rmb();
> 	r3 = READ_ONCE(*x);
> }
>
> exists (2:r2=2 /\ 2:r3=0)
>
> Here such a hypothetical future architecture could merge the
> operations to *y by P0 into a single store, effectively turning the
> code of P0 into
>
> P0(int *x, atomic_t *y)
> {
> 	int r1;
>
> 	WRITE_ONCE(*x, 1);
> 	WRITE_ONCE(*y, 1);
> 	r1 = 0;
> }
>
> The stricter patch would not be sound with this hypothetical
> architecture, while the more relaxed patch should be.
>
> I don't think such a future architecture is likely since I don't
> expect there to be any practical performance impact. At the same time
> I also don't currently see any advantage of the stricter model.
>
> For this reason I would slightly prefer the more relaxed model.

I don't see any point in worrying about hypothetical future
architectures that might use a questionable design.

Also, given that this test is forbidden:

P0                        P1             P2
------------------------- -------------- ----------------------------
WRITE_ONCE(*x, 1);        atomic_inc(y); r1 = atomic_read_acquire(y);
atomic_set_release(y, 1);                r2 = READ_ONCE(*x);

exists (2:r1=2 /\ 2:r2=0)

shouldn't the following also be forbidden?

P0                        P1             P2
------------------------- -------------- ----------------------------
WRITE_ONCE(*x, 1);        atomic_inc(y); r1 = atomic_read_acquire(y);
atomic_set_release(y, 1); atomic_inc(y); r2 = READ_ONCE(*x);

exists (2:r1=3 /\ 2:r2=0)

> > +Rmw sequences have a special property in the LKMM: They can extend the
> > +cumul-fence relation.  That is, if we have:
> > +
> > +	U ->cumul-fence X -> rmw-sequence Y
> > +
> > +then also U ->cumul-fence Y.  Thinking about this in terms of the
> > +operational model, U ->cumul-fence X says that the store U propagates
> > +to each CPU before the store X does.  Then the fact that X and Y are
> > +linked by an rmw sequence means that U also propagates to each CPU
> > +before Y does.
> > +
>
> Here I would add that the rmw sequences also play a similar role in
> the w-post-bounded relation. For example as follows:
>
> +Rmw sequences have a special property in the LKMM: They can extend the
> +cumul-fence and w-post-bounded relations. That is, if we have:
> +
> +	U ->cumul-fence X -> rmw-sequence Y
> +
> +then also U ->cumul-fence Y, and analogously if we have
> +
> +	U ->w-post-bounded X -> rmw-sequence Y
> +
> +then also U ->w-post-bounded Y. Thinking about this in terms of the
> +operational model, U ->cumul-fence X says that the store U propagates
> +to each CPU before the store X does. Then the fact that X and Y are
> +linked by an rmw sequence means that U also propagates to each CPU
> +before Y does.
> +

I considered this and specifically decided against it, because the
w-post-bounded relation has not yet been introduced at this point in the
document.  It doesn't show up until much later.  (Also, there didn't
seem to be any graceful way of mentioning this fact at the point where
w-post-bounded does get introduced, and on the whole the matter didn't
seem to be all that important.)

Instead of your suggested change, I suppose it would be okay to say, at
the end of the paragraph:

	(In an analogous way, rmw sequences can extend the
	w-post-bounded relation defined below in the PLAIN ACCESSES
	AND DATA RACES section.)

Or something like this could be added to the ODDS AND ENDS section at
the end of the document.

Alan
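The rf-versus-rfe point debated in this exchange can be made concrete with a small editorial sketch (Python, invented event names; not part of either proposed patch). Tagging each event with its thread id, rfe keeps only the reads-from edges that cross threads, so the internal link from P0's release store to P0's own relaxed inc drops out of (rfe ; rmw)* while (rf ; rmw)* retains it:

```python
# Editorial sketch of rf vs rfe (not cat code). Events are
# (thread, label) pairs; rfe keeps only cross-thread rf edges.

def compose(r, s):
    """Relational composition r ; s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

# In atomics-not-monotonic-2, P0's release store to y is read by P0's
# *own* relaxed inc: an internal (same-thread) rf edge.
REL = (0, "rel_store_y")   # smp_store_release(y, 0)
LD0 = (0, "inc_load")      # load half of atomic_inc_return_relaxed
ST0 = (0, "inc_store")     # store half

rf = {(REL, LD0)}
rmw = {(LD0, ST0)}
rfe = {(w, r) for (w, r) in rf if w[0] != r[0]}   # external edges only

step_rf = compose(rf, rmw)     # one edge: the internal link is kept
step_rfe = compose(rfe, rmw)   # empty: the rfe variant drops it
print(step_rf, step_rfe)
```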
On Tue, Nov 15, 2022 at 11:13:12AM -0500, Alan Stern wrote:
> On Tue, Nov 15, 2022 at 02:05:39PM +0000, Jonas Oberhauser wrote:
> > So I would change the wording to "Viktor has pointed out a weakness
> > in the Linux Kernel Memory Model."
>
> People will wonder who Viktor is.  I don't have his full name or email
> address.  In fact, shouldn't he have been CC'ed during this entire
> discussion?

Viktor Vafeiadis <viktor@mpi-sws.org>

But I defer to Jonas on CCing, just in case Viktor needs to be provided
context on this discussion.

							Thanx, Paul

[ rest of quoted message trimmed ]
Index: usb-devel/tools/memory-model/linux-kernel.cat
===================================================================
--- usb-devel.orig/tools/memory-model/linux-kernel.cat
+++ usb-devel/tools/memory-model/linux-kernel.cat
@@ -74,8 +74,9 @@ let ppo = to-r | to-w | fence | (po-unlo
 
 (* Propagation: Ordering from release operations and strong fences. *)
 let A-cumul(r) = (rfe ; [Marked])? ; r
+let rmw-sequence = (rf ; rmw)*
 let cumul-fence = [Marked] ; (A-cumul(strong-fence | po-rel) | wmb |
-	po-unlock-lock-po) ; [Marked]
+	po-unlock-lock-po) ; [Marked] ; rmw-sequence
 let prop = [Marked] ; (overwrite & ext)? ; cumul-fence* ;
 	[Marked] ; rfe? ; [Marked]
 
@@ -174,7 +175,7 @@ let vis = cumul-fence* ; rfe? ; [Marked]
 let w-pre-bounded = [Marked] ; (addr | fence)?
 let r-pre-bounded = [Marked] ; (addr | nonrw-fence |
 	([R4rmb] ; fencerel(Rmb) ; [~Noreturn]))?
-let w-post-bounded = fence? ; [Marked]
+let w-post-bounded = fence? ; [Marked] ; rmw-sequence
 let r-post-bounded = (nonrw-fence | ([~Noreturn] ; fencerel(Rmb) ; [R4rmb]))? ;
 	[Marked]

Index: usb-devel/tools/memory-model/Documentation/explanation.txt
===================================================================
--- usb-devel.orig/tools/memory-model/Documentation/explanation.txt
+++ usb-devel/tools/memory-model/Documentation/explanation.txt
@@ -1006,6 +1006,34 @@ order.  Equivalently,
 where the rmw relation links the read and write events making up each
 atomic update.  This is what the LKMM's "atomic" axiom says.
 
+Atomic rmw updates play one more role in the LKMM: They can form "rmw
+sequences".  An rmw sequence is simply a bunch of atomic updates where
+each update reads from the previous one.  Written using events, it
+looks like this:
+
+	Z0 ->rf Y1 ->rmw Z1 ->rf ... ->rf Yn ->rmw Zn,
+
+where Z0 is some store event and n can be any number (even 0, in the
+degenerate case).  We write this relation as: Z0 ->rmw-sequence Zn.
+Note that this implies Z0 and Zn are stores to the same variable.
+
+Rmw sequences have a special property in the LKMM: They can extend the
+cumul-fence relation.  That is, if we have:
+
+	U ->cumul-fence X -> rmw-sequence Y
+
+then also U ->cumul-fence Y.  Thinking about this in terms of the
+operational model, U ->cumul-fence X says that the store U propagates
+to each CPU before the store X does.  Then the fact that X and Y are
+linked by an rmw sequence means that U also propagates to each CPU
+before Y does.
+
+(The notion of rmw sequences in the LKMM is similar to, but not quite
+the same as, that of release sequences in the C11 memory model.  They
+were added to the LKMM to fix an obscure bug; without them, atomic
+updates with full-barrier semantics did not always guarantee ordering
+at least as strong as atomic updates with release-barrier semantics.)
+
 
 THE PRESERVED PROGRAM ORDER RELATION: ppo
 -----------------------------------------