From patchwork Fri Sep 8 20:35:57 2023
X-Patchwork-Submitter: Frederic Weisbecker
X-Patchwork-Id: 137800
From: Frederic Weisbecker
To: "Paul E. McKenney"
Cc: LKML, Frederic Weisbecker, rcu, Uladzislau Rezki, Neeraj Upadhyay,
 Boqun Feng, Joel Fernandes
Subject: [PATCH 04/10] rcu/nocb: Remove needless full barrier after callback advancing
Date: Fri, 8 Sep 2023 22:35:57 +0200
Message-ID: <20230908203603.5865-5-frederic@kernel.org>
In-Reply-To: <20230908203603.5865-1-frederic@kernel.org>
References: <20230908203603.5865-1-frederic@kernel.org>

A full barrier is issued from nocb_gp_wait() upon callbacks advancing
in order to order grace period completion with callbacks execution.

However, these two events are already ordered by the
smp_mb__after_unlock_lock() barrier within the call to
raw_spin_lock_rcu_node() that is necessary for callbacks advancing to
happen.
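For reference, raw_spin_lock_rcu_node() is a thin wrapper that couples
acquisition of the rcu_node ->lock with exactly that barrier. A sketch,
roughly matching its definition in kernel/rcu/rcu.h (simplified here):

	/*
	 * Acquire the specified rcu_node structure's ->lock, then make the
	 * preceding unlock+lock pair act as a full memory barrier.
	 * smp_mb__after_unlock_lock() is a no-op on architectures where
	 * unlock+lock already provides full ordering.
	 */
	#define raw_spin_lock_rcu_node(p)				\
	do {								\
		raw_spin_lock(&ACCESS_PRIVATE(p, lock));		\
		smp_mb__after_unlock_lock();				\
	} while (0)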
The following litmus test shows the kind of guarantee that this
barrier provides:

	C smp_mb__after_unlock_lock

	{}

	// rcu_gp_cleanup()
	P0(spinlock_t *rnp_lock, int *gpnum)
	{
		// Grace period cleanup increases the gp sequence number
		spin_lock(rnp_lock);
		WRITE_ONCE(*gpnum, 1);
		spin_unlock(rnp_lock);
	}

	// nocb_gp_wait()
	P1(spinlock_t *rnp_lock, spinlock_t *nocb_lock, int *gpnum, int *cb_ready)
	{
		int r1;

		// Call rcu_advance_cbs() from nocb_gp_wait()
		spin_lock(nocb_lock);
		spin_lock(rnp_lock);
		smp_mb__after_unlock_lock();
		r1 = READ_ONCE(*gpnum);
		WRITE_ONCE(*cb_ready, 1);
		spin_unlock(rnp_lock);
		spin_unlock(nocb_lock);
	}

	// nocb_cb_wait()
	P2(spinlock_t *nocb_lock, int *cb_ready, int *cb_executed)
	{
		int r2;

		// rcu_do_batch() -> rcu_segcblist_extract_done_cbs()
		spin_lock(nocb_lock);
		r2 = READ_ONCE(*cb_ready);
		spin_unlock(nocb_lock);

		// Actual callback execution
		WRITE_ONCE(*cb_executed, 1);
	}

	P3(int *cb_executed, int *gpnum)
	{
		int r3;

		WRITE_ONCE(*cb_executed, 2);
		smp_mb();
		r3 = READ_ONCE(*gpnum);
	}

	exists (1:r1=1 /\ 2:r2=1 /\ cb_executed=2 /\ 3:r3=0) (* Bad outcome. *)

Here the bad outcome occurs only if the smp_mb__after_unlock_lock()
is removed. This barrier orders the grace period completion against
callbacks advancing, and even against later callbacks invocation,
thanks to the opportunistic propagation via the ->nocb_lock to
nocb_cb_wait().

Therefore the smp_mb() placed after callbacks advancing can be safely
removed.

Signed-off-by: Frederic Weisbecker
---
 kernel/rcu/tree_nocb.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 6e63ba4788e1..2dc76f5e6e78 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -779,7 +779,6 @@ static void nocb_gp_wait(struct rcu_data *my_rdp)
 		if (rcu_segcblist_ready_cbs(&rdp->cblist)) {
 			needwake = rdp->nocb_cb_sleep;
 			WRITE_ONCE(rdp->nocb_cb_sleep, false);
-			smp_mb(); /* CB invocation -after- GP end. */
 		} else {
 			needwake = false;
 		}
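(For anyone wanting to reproduce this: litmus tests like the one above
can be run through the herd7 simulator against the Linux-kernel memory
model shipped in tools/memory-model/, along the lines of the README
there; the file name below is just an example:

	$ herd7 -conf linux-kernel.cfg smp-mb-after-unlock-lock.litmus

With the smp_mb__after_unlock_lock() line present, the "exists" clause
should never be satisfied; deleting that line makes the bad outcome
reachable, as described above.)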