Message ID | 20230518224008.2468-5-sj@kernel.org |
---|---|
State | New |
Headers |
From: SeongJae Park <sj@kernel.org>
To: paulmck@kernel.org
Cc: SeongJae Park <sj@kernel.org>, joel@joelfernandes.org, corbet@lwn.net, rcu@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/4] Docs/RCU/rculist_nulls: Drop unnecessary '_release' in insert function
Date: Thu, 18 May 2023 22:40:08 +0000
Message-Id: <20230518224008.2468-5-sj@kernel.org>
In-Reply-To: <20230518224008.2468-1-sj@kernel.org>
References: <20230518224008.2468-1-sj@kernel.org>
Series | Docs/RCU/rculist_nulls: Minor fixups |
Commit Message
SeongJae Park
May 18, 2023, 10:40 p.m. UTC
The document says we can avoid extra smp_rmb() in lockless_lookup() and
extra _release() in insert function when hlist_nulls is used. However,
the example code snippet for the insert function is still using the
extra _release(). Drop it.
Signed-off-by: SeongJae Park <sj@kernel.org>
---
Documentation/RCU/rculist_nulls.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Comments
On Thu, May 18, 2023 at 6:40 PM SeongJae Park <sj@kernel.org> wrote:
>
> The document says we can avoid extra smp_rmb() in lockless_lookup() and
> extra _release() in insert function when hlist_nulls is used. However,
> the example code snippet for the insert function is still using the
> extra _release(). Drop it.
>
> Signed-off-by: SeongJae Park <sj@kernel.org>
> ---
>  Documentation/RCU/rculist_nulls.rst | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
> index 5cd6f3f8810f..463270273d89 100644
> --- a/Documentation/RCU/rculist_nulls.rst
> +++ b/Documentation/RCU/rculist_nulls.rst
> @@ -191,7 +191,7 @@ scan the list again without harm.
>    obj = kmem_cache_alloc(cachep);
>    lock_chain(); // typically a spin_lock()
>    obj->key = key;
> -  atomic_set_release(&obj->refcnt, 1); // key before refcnt
> +  atomic_set(&obj->refcnt, 1);
>    /*
>     * insert obj in RCU way (readers might be traversing chain)
>     */

If write to ->refcnt of 1 is reordered with setting of ->key, what
prevents the 'lookup algorithm' from doing a key match (obj->key ==
key) before the refcount has been initialized?

Are we sure the reordering mentioned in the document is the same as
the reordering prevented by the atomic_set_release()?

For the other 3 patches, feel free to add:
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>

thanks,

 - Joel
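For reference, the reader side that the question refers to looks roughly like the lookup algorithm in Documentation/RCU/rculist_nulls.rst. The snippet below is a paraphrase, not the verbatim document text; try_get_ref() and put_ref() are the document's placeholder helpers (not real kernel APIs), and head, slot and key follow the document's example names:

```c
/*
 * Rough paraphrase of the hlist_nulls lookup algorithm in
 * Documentation/RCU/rculist_nulls.rst.  try_get_ref()/put_ref() are
 * the document's placeholders, not real kernel APIs.
 */
begin:
	rcu_read_lock();
	hlist_nulls_for_each_entry_rcu(obj, node, head, obj_node) {
		if (obj->key == key) {
			if (!try_get_ref(obj)) {	/* may fail for freed objects */
				rcu_read_unlock();
				goto begin;
			}
			if (obj->key != key) {		/* object was reused: retry */
				put_ref(obj);
				rcu_read_unlock();
				goto begin;
			}
			goto out;
		}
	}
	/* A nulls value other than this slot means the object moved chains. */
	if (get_nulls_value(node) != slot) {
		rcu_read_unlock();
		goto begin;
	}
	obj = NULL;
out:
	rcu_read_unlock();
```

Joel's question is about the second obj->key check: what orders that load against the writer's initialization of ->key and ->refcnt once try_get_ref() has succeeded.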
On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <joel@joelfernandes.org> wrote:
> [...]
>
> If write to ->refcnt of 1 is reordered with setting of ->key, what
> prevents the 'lookup algorithm' from doing a key match (obj->key ==
> key) before the refcount has been initialized?
>
> Are we sure the reordering mentioned in the document is the same as
> the reordering prevented by the atomic_set_release()?

Paul, may I ask your opinion?


Thanks,
SJ
On Fri, Jun 09, 2023 at 07:12:06PM +0000, SeongJae Park wrote:
> On Fri, 19 May 2023 14:52:50 -0400 Joel Fernandes <joel@joelfernandes.org> wrote:
> > [...]
> >
> > If write to ->refcnt of 1 is reordered with setting of ->key, what
> > prevents the 'lookup algorithm' from doing a key match (obj->key ==
> > key) before the refcount has been initialized?
> >
> > Are we sure the reordering mentioned in the document is the same as
> > the reordering prevented by the atomic_set_release()?
>
> Paul, may I ask your opinion?

The next line of code is this:

	hlist_nulls_add_head_rcu(&obj->obj_node, list);

If I understand the code correctly, obj (and thus *obj) are not
visible to readers before the hlist_nulls_add_head_rcu().  And
hlist_nulls_add_head_rcu() uses rcu_assign_pointer() to ensure that
initialization (including both ->key and ->refcnt) is ordered before
list insertion.

Except that this memory is being allocated from a slab cache that was
created with SLAB_TYPESAFE_BY_RCU.  This means that there can be readers
who gained a reference before this object was freed, and who still hold
their references.

Unfortunately, the implementation of try_get_ref() is not shown.  However,
if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
the subsequent check of obj->key with key in the lookup algorithm to
be stable.  For this check to be stable, try_get_ref() needs to use an
atomic operation with at least acquire semantics (kref_get_unless_zero()
would work), and this must pair with something in the initialization.

So I don't see how it is safe to weaken that atomic_set_release() to
atomic_set(), even on x86.

Or am I missing something subtle here?

							Thanx, Paul
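To make the acquire requirement concrete, one hypothetical way try_get_ref() could be written is shown below. This is an illustration only (the document never shows try_get_ref(), and struct obj here stands for the document's example object with an atomic_t refcnt field); atomic_add_unless() is a conditional RMW that is fully ordered on success, which gives at least the acquire semantics Paul asks for and pairs with the writer's atomic_set_release():

```c
#include <linux/atomic.h>

/*
 * Hypothetical try_get_ref() for the document's example object.  A
 * successful atomic_add_unless() is a fully ordered RMW, so the later
 * obj->key load cannot observe a value older than the one published
 * before the matching atomic_set_release(&obj->refcnt, 1).
 */
static bool try_get_ref(struct obj *obj)
{
	return atomic_add_unless(&obj->refcnt, 1, 0);	/* fails if refcnt == 0 */
}
```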
On Fri, 9 Jun 2023 16:42:59 -0700 "Paul E. McKenney" <paulmck@kernel.org> wrote:
> [...]
>
> Unfortunately, the implementation of try_get_ref() is not shown.  However,
> if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
> the subsequent check of obj->key with key in the lookup algorithm to
> be stable.  For this check to be stable, try_get_ref() needs to use an
> atomic operation with at least acquire semantics (kref_get_unless_zero()
> would work), and this must pair with something in the initialization.
>
> So I don't see how it is safe to weaken that atomic_set_release() to
> atomic_set(), even on x86.

Thank you for the nice explanation, and I agree.

> Or am I missing something subtle here?

I found the text is saying extra _release() in insert function is not
needed[1], and I thought it means the atomic_set_release().  Am I misreading
it?  If not, would it be better to fix the text, for example, like below?

```
--- a/Documentation/RCU/rculist_nulls.rst
+++ b/Documentation/RCU/rculist_nulls.rst
@@ -129,8 +129,7 @@ very very fast (before the end of RCU grace period)
 Avoiding extra smp_rmb()
 ========================
 
-With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
-and extra _release() in insert function.
+With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup().
 
 For example, if we choose to store the slot number as the 'nulls'
 end-of-list marker for each slot of the hash table, we can detect
@@ -182,6 +181,9 @@ scan the list again without harm.
 2) Insert algorithm
 -------------------
 
+Same to the above one, but uses hlist_nulls_add_head_rcu() instead of
+hlist_add_head_rcu().
+
 ::
 
   /*
@@ -191,7 +193,7 @@ scan the list again without harm.
   obj = kmem_cache_alloc(cachep);
   lock_chain(); // typically a spin_lock()
   obj->key = key;
-  atomic_set_release(&obj->refcnt, 1); // key before refcnt
+  atomic_set(&obj->refcnt, 1);
   /*
    * insert obj in RCU way (readers might be traversing chain)
    */
```

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/RCU/rculist_nulls.rst#n133


Thanks,
SJ
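For context, the ordering Paul describes for the first-publication case comes from hlist_nulls_add_head_rcu() itself, which publishes the node with rcu_assign_pointer(). The sketch below paraphrases include/linux/rculist_nulls.h (it is not verbatim; check the kernel source for the authoritative version), and, as Paul notes, this release only covers readers who find the object through the list head, not readers still holding a stale pointer from a SLAB_TYPESAFE_BY_RCU reuse:

```c
/* Paraphrased from include/linux/rculist_nulls.h; not verbatim. */
static inline void hlist_nulls_add_head_rcu(struct hlist_nulls_node *n,
					    struct hlist_nulls_head *h)
{
	struct hlist_nulls_node *first = h->first;

	n->next = first;
	n->pprev = &h->first;
	/* Release: orders the caller's ->key/->refcnt stores before publication. */
	rcu_assign_pointer(hlist_nulls_first_rcu(h), n);
	if (!is_a_nulls(first))
		first->pprev = &n->next;
}
```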
Hi Paul,

> On Jun 10, 2023, at 07:42, Paul E. McKenney <paulmck@kernel.org> wrote:
> [...]
>
> Unfortunately, the implementation of try_get_ref() is not shown.  However,
> if ->refcnt is non-zero, this can succeed, and if it succeeds, we need
> the subsequent check of obj->key with key in the lookup algorithm to
> be stable.  For this check to be stable, try_get_ref() needs to use an
> atomic operation with at least acquire semantics (kref_get_unless_zero()
> would work), and this must pair with something in the initialization.
>
> So I don't see how it is safe to weaken that atomic_set_release() to
> atomic_set(), even on x86.

I totally agree, but only in the case of using hlist_nulls.

That means, atomic_set_release() is not enough in the case without using hlist_nulls,
we must ensure that storing to obj->next (in hlist_add_head_rcu) is ordered before storing
to obj->key. Otherwise, we can get the new 'next' and the old 'key' in which case we can't detect
an object movement (from one chain to another).

So, I'm afraid that the atomic_set_release() in insertion algorithm without using hlist_nulls should
change back to:

	smp_wmb();
	atomic_set(&obj->refcnt, 1);

Thanks,
Alan
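A sketch of the insert function Alan is arguing for in the non-hlist_nulls case is shown below, following the document's snippet style (illustrative only; the exact wording lives in the kernel tree). The point is that a full store barrier orders the ->key store before every later store, including the ->next store done inside hlist_add_head_rcu(), whereas a release on ->refcnt would only order ->key before ->refcnt:

```c
/* Sketch of the non-hlist_nulls insert with Alan's suggested ordering. */
obj = kmem_cache_alloc(cachep);
lock_chain();			/* typically a spin_lock() */
obj->key = key;
smp_wmb();			/* key stored before refcnt and before ->next */
atomic_set(&obj->refcnt, 1);
hlist_add_head_rcu(&obj->obj_node, list);
unlock_chain();			/* typically a spin_unlock() */
```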
Hi SJ,

> On Jun 10, 2023, at 08:20, SeongJae Park <sj@kernel.org> wrote:
> [...]
>
> I found the text is saying extra _release() in insert function is not
> needed[1], and I thought it means the atomic_set_release().  Am I misreading
> it?  If not, would it be better to fix the text, for example, like below?

The original text is:

  "With hlist_nulls we can avoid extra smp_rmb() in lockless_lookup()
  and extra smp_wmb() in insert function."

We can avoid the extra smp_wmb(), but the _release is required.  As Paul said,

>> Except that this memory is being allocated from a slab cache that was
>> created with SLAB_TYPESAFE_BY_RCU.  This means that there can be readers
>> who gained a reference before this object was freed, and who still hold
>> their references.

Without the _release, we can get the old 'key' after the invocation of
try_get_ref() (although try_get_ref() noticed the effect of atomic_set()).

Thanks,
Alan
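Putting Paul's and Alan's points together for the hlist_nulls case, the pairing that keeps the reader's key check stable is a release on the writer side and at least an acquire inside try_get_ref(). The fragment below is illustrative only; try_get_ref() and put_ref() remain the document's placeholders:

```c
/* Writer (insert), holding the chain lock: */
obj->key = key;
atomic_set_release(&obj->refcnt, 1);	/* key visible before refcnt == 1 */
hlist_nulls_add_head_rcu(&obj->obj_node, list);

/* Reader, possibly holding obj from a SLAB_TYPESAFE_BY_RCU reuse: */
if (try_get_ref(obj)) {			/* must be at least an acquire on success */
	if (obj->key != key)		/* stable only because of the pairing above */
		put_ref(obj);		/* reused object: drop it, caller retries */
}
```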
> On Jun 10, 2023, at 13:37, Alan Huang <mmpgouride@gmail.com> wrote:
>
> Hi Paul,
> [...]
>
> I totally agree, but only in the case of using hlist_nulls.
>
> That means, atomic_set_release() is not enough in the case without using hlist_nulls,
> we must ensure that storing to obj->next (in hlist_add_head_rcu) is ordered before storing

Typo: not before, but after.

> to obj->key. Otherwise, we can get the new 'next' and the old 'key' in which case we can't detect
> an object movement (from one chain to another).
>
> So, I'm afraid that the atomic_set_release() in insertion algorithm without using hlist_nulls should
> change back to:
>
>	smp_wmb();
>	atomic_set(&obj->refcnt, 1);
>
> Thanks,
> Alan
diff --git a/Documentation/RCU/rculist_nulls.rst b/Documentation/RCU/rculist_nulls.rst
index 5cd6f3f8810f..463270273d89 100644
--- a/Documentation/RCU/rculist_nulls.rst
+++ b/Documentation/RCU/rculist_nulls.rst
@@ -191,7 +191,7 @@ scan the list again without harm.
   obj = kmem_cache_alloc(cachep);
   lock_chain(); // typically a spin_lock()
   obj->key = key;
-  atomic_set_release(&obj->refcnt, 1); // key before refcnt
+  atomic_set(&obj->refcnt, 1);
   /*
    * insert obj in RCU way (readers might be traversing chain)
    */