Message ID | 20221122010421.3799681-8-paulmck@kernel.org |
---|---|
State | New |
Headers |
Return-Path: <linux-kernel-owner@vger.kernel.org>
From: "Paul E. McKenney" <paulmck@kernel.org>
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Joel Fernandes (Google)" <joel@joelfernandes.org>, "Paul E. McKenney" <paulmck@kernel.org>, Dennis Zhou <dennis@kernel.org>, Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>, linux-mm@kvack.org
Subject: [PATCH v2 rcu 08/16] percpu-refcount: Use call_rcu_flush() for atomic switch
Date: Mon, 21 Nov 2022 17:04:13 -0800
Message-Id: <20221122010421.3799681-8-paulmck@kernel.org>
X-Mailer: git-send-email 2.31.1.189.g2e36527f23
In-Reply-To: <20221122010408.GA3799268@paulmck-ThinkPad-P17-Gen-1>
References: <20221122010408.GA3799268@paulmck-ThinkPad-P17-Gen-1>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org |
Series |
[v2,rcu,01/16] rcu: Simplify rcu_init_nohz() cpumask handling
|
Commit Message
Paul E. McKenney
Nov. 22, 2022, 1:04 a.m. UTC
From: "Joel Fernandes (Google)" <joel@joelfernandes.org> Earlier commits in this series allow battery-powered systems to build their kernels with the default-disabled CONFIG_RCU_LAZY=y Kconfig option. This Kconfig option causes call_rcu() to delay its callbacks in order to batch callbacks. This means that a given RCU grace period covers more callbacks, thus reducing the number of grace periods, in turn reducing the amount of energy consumed, which increases battery lifetime which can be a very good thing. This is not a subtle effect: In some important use cases, the battery lifetime is increased by more than 10%. This CONFIG_RCU_LAZY=y option is available only for CPUs that offload callbacks, for example, CPUs mentioned in the rcu_nocbs kernel boot parameter passed to kernels built with CONFIG_RCU_NOCB_CPU=y. Delaying callbacks is normally not a problem because most callbacks do nothing but free memory. If the system is short on memory, a shrinker will kick all currently queued lazy callbacks out of their laziness, thus freeing their memory in short order. Similarly, the rcu_barrier() function, which blocks until all currently queued callbacks are invoked, will also kick lazy callbacks, thus enabling rcu_barrier() to complete in a timely manner. However, there are some cases where laziness is not a good option. For example, synchronize_rcu() invokes call_rcu(), and blocks until the newly queued callback is invoked. It would not be a good for synchronize_rcu() to block for ten seconds, even on an idle system. Therefore, synchronize_rcu() invokes call_rcu_flush() instead of call_rcu(). The arrival of a non-lazy call_rcu_flush() callback on a given CPU kicks any lazy callbacks that might be already queued on that CPU. After all, if there is going to be a grace period, all callbacks might as well get full benefit from it. Yes, this could be done the other way around by creating a call_rcu_lazy(), but earlier experience with this approach and feedback at the 2022 Linux Plumbers Conference shifted the approach to call_rcu() being lazy with call_rcu_flush() for the few places where laziness is inappropriate. And another call_rcu() instance that cannot be lazy is the one on the percpu refcounter's "per-CPU to atomic switch" code path, which uses RCU when switching to atomic mode. The enqueued callback wakes up waiters waiting in the percpu_ref_switch_waitq. Allowing this callback to be lazy would result in unacceptable slowdowns for users of per-CPU refcounts, such as blk_pre_runtime_suspend(). Therefore, make __percpu_ref_switch_to_atomic() use call_rcu_flush() in order to revert to the old behavior. Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org> Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Cc: Dennis Zhou <dennis@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Christoph Lameter <cl@linux.com> Cc: <linux-mm@kvack.org> --- lib/percpu-refcount.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index e5c5315da2741..65c58a029297d 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -230,7 +230,8 @@ static void __percpu_ref_switch_to_atomic(struct percpu_ref *ref,
 		percpu_ref_noop_confirm_switch;
 
 	percpu_ref_get(ref);	/* put after confirmation */
-	call_rcu(&ref->data->rcu, percpu_ref_switch_to_atomic_rcu);
+	call_rcu_flush(&ref->data->rcu,
+		       percpu_ref_switch_to_atomic_rcu);
 }
 
 static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
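For context, a hedged usage sketch (not from the patch; example_quiesce() is a hypothetical caller) of where this latency is felt: percpu_ref_switch_to_atomic_sync() queues the callback modified above and then sleeps on percpu_ref_switch_waitq until it has run, so a lazily deferred call_rcu() could stretch that sleep to several seconds.

#include <linux/percpu-refcount.h>

static void example_quiesce(struct percpu_ref *ref)
{
	/* Queues percpu_ref_switch_to_atomic_rcu() and waits for it to run. */
	percpu_ref_switch_to_atomic_sync(ref);

	/* ... do work that requires an exact, atomic reference count ... */

	/* Resume the per-CPU fast path once the quiescent work is done. */
	percpu_ref_switch_to_percpu(ref);
}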