From patchwork Thu Mar 30 22:47:07 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77441
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", kernel test robot
Subject: [PATCH rcu 01/20] rcu-tasks: Fix warning for unused tasks_rcu_exit_srcu
Date: Thu, 30 Mar 2023 15:47:07 -0700
Message-Id: <20230330224726.662344-1-paulmck@kernel.org>
X-Mailer: git-send-email 2.40.0.rc2

The tasks_rcu_exit_srcu variable is used only by kernels built with
CONFIG_TASKS_RCU=y, but is defined for all kernels built with
CONFIG_TASKS_RCU_GENERIC=y.  Therefore, in kernels built with
CONFIG_TASKS_RCU_GENERIC=y but CONFIG_TASKS_RCU=n, this definition
produces a "defined but not used" warning.  This commit therefore moves
this variable under CONFIG_TASKS_RCU.
Link: https://lore.kernel.org/oe-kbuild-all/202303191536.XzMSyzTl-lkp@intel.com/
Reported-by: kernel test robot
Signed-off-by: Paul E. McKenney
Reviewed-by: Frederic Weisbecker
---
 kernel/rcu/tasks.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index bfb5e1549f2b..185358c2f08d 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -136,8 +136,10 @@ static struct rcu_tasks rt_name =					\
 	.kname = #rt_name,						\
 }
 
+#ifdef CONFIG_TASKS_RCU
 /* Track exiting tasks in order to allow them to be waited for. */
 DEFINE_STATIC_SRCU(tasks_rcu_exit_srcu);
+#endif
 
 /* Avoid IPIing CPUs early in the grace period. */
 #define RCU_TASK_IPI_DELAY (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) ? HZ / 2 : 0)
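The warning addressed above is the ordinary C case of a file-scope definition
that some configurations compile but never reference.  The stand-alone sketch
below reproduces that situation and the matching #ifdef fix outside the
kernel; CONFIG_FOO and my_table are illustrative names only and do not appear
in the patch.

#include <stdio.h>

/* #define CONFIG_FOO 1 */		/* Uncomment to compile in the only user. */

#ifdef CONFIG_FOO			/* Guard matches the sole consumer below. */
static int my_table[] = { 1, 2, 3 };
#endif

int main(void)
{
#ifdef CONFIG_FOO
	printf("%d\n", my_table[0]);	/* The only reference to my_table. */
#else
	puts("my_table compiled out, so no \"defined but not used\" warning");
#endif
	return 0;
}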
From patchwork Thu Mar 30 22:47:08 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77449

From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 02/20] srcu: Add whitespace to __SRCU_STRUCT_INIT() & __DEFINE_SRCU()
Date: Thu, 30 Mar 2023 15:47:08 -0700
Message-Id: <20230330224726.662344-2-paulmck@kernel.org>
X-Mailer: git-send-email 2.40.0.rc2

This is a whitespace-only commit with no change in functionality.
Its purpose is to prepare for later commits that: (1) Cause statically
allocated srcu_struct structures to rely on compile-time initialization
and (2) Move fields from the srcu_struct structure to a new srcu_usage
structure.
Cc: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E. McKenney
---
 include/linux/srcutree.h | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 558057b517b7..488d0e5d1ba3 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -108,13 +108,13 @@ struct srcu_struct {
 #define SRCU_STATE_SCAN1	1
 #define SRCU_STATE_SCAN2	2
 
-#define __SRCU_STRUCT_INIT(name, pcpu_name)			\
-{								\
-	.sda = &pcpu_name,					\
-	.lock = __SPIN_LOCK_UNLOCKED(name.lock),		\
-	.srcu_gp_seq_needed = -1UL,				\
-	.work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0),\
-	__SRCU_DEP_MAP_INIT(name)				\
+#define __SRCU_STRUCT_INIT(name, pcpu_name)				\
+{									\
+	.sda = &pcpu_name,						\
+	.lock = __SPIN_LOCK_UNLOCKED(name.lock),			\
+	.srcu_gp_seq_needed = -1UL,					\
+	.work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0),	\
+	__SRCU_DEP_MAP_INIT(name)					\
 }
 
 /*
@@ -137,15 +137,15 @@ struct srcu_struct {
  * See include/linux/percpu-defs.h for the rules on per-CPU variables.
  */
 #ifdef MODULE
-# define __DEFINE_SRCU(name, is_static)				\
-	is_static struct srcu_struct name;			\
-	extern struct srcu_struct * const __srcu_struct_##name; \
-	struct srcu_struct * const __srcu_struct_##name		\
+# define __DEFINE_SRCU(name, is_static)					\
+	is_static struct srcu_struct name;				\
+	extern struct srcu_struct * const __srcu_struct_##name;	\
+	struct srcu_struct * const __srcu_struct_##name			\
 		__section("___srcu_struct_ptrs") = &name
 #else
-# define __DEFINE_SRCU(name, is_static)				\
-	static DEFINE_PER_CPU(struct srcu_data, name##_srcu_data); \
-	is_static struct srcu_struct name =			\
+# define __DEFINE_SRCU(name, is_static)					\
+	static DEFINE_PER_CPU(struct srcu_data, name##_srcu_data);	\
+	is_static struct srcu_struct name =				\
 		__SRCU_STRUCT_INIT(name, name##_srcu_data)
 #endif
 #define DEFINE_SRCU(name)		__DEFINE_SRCU(name, /* not static */)
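For orientation, the __DEFINE_SRCU() macro realigned above is what
DEFINE_SRCU() and DEFINE_STATIC_SRCU() expand to when client code creates an
srcu_struct at build time.  A minimal sketch of such a client, using the
long-standing SRCU API (my_srcu, my_reader, and my_updater are illustrative
names, not part of this series):

#include <linux/srcu.h>

DEFINE_STATIC_SRCU(my_srcu);		/* Expands through __DEFINE_SRCU(). */

static void my_reader(void)
{
	int idx;

	idx = srcu_read_lock(&my_srcu);	/* Enter read-side critical section. */
	/* ... dereference data protected by my_srcu ... */
	srcu_read_unlock(&my_srcu, idx);
}

static void my_updater(void)
{
	/* ... unpublish the old data ... */
	synchronize_srcu(&my_srcu);	/* Wait for pre-existing readers. */
	/* ... now safe to free the old data ... */
}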
From patchwork Thu Mar 30 22:47:09 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77435
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 03/20] srcu: Use static init for statically allocated in-module srcu_struct
Date: Thu, 30 Mar 2023 15:47:09 -0700
Message-Id: <20230330224726.662344-3-paulmck@kernel.org>
X-Mailer: git-send-email 2.40.0.rc2

Further shrinking the srcu_struct structure is eased by requiring
that in-module srcu_struct structures rely more heavily on static
initialization.
In particular, this preserves the property that a module-load-time
srcu_struct initialization can fail only due to memory-allocation
failure of the per-CPU srcu_data structures.  It might also slightly
improve robustness by keeping the number of memory allocations that
must succeed down to the single percpu_alloc() call.

This is in preparation for splitting an srcu_usage structure out
of the srcu_struct structure.

[ paulmck: Fold in qiang1.zhang@intel.com feedback. ]

Cc: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E. McKenney
---
 include/linux/srcutree.h | 19 ++++++++++++++-----
 kernel/rcu/srcutree.c    | 19 +++++++++++++------
 2 files changed, 27 insertions(+), 11 deletions(-)

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 488d0e5d1ba3..3ce6deee1dbe 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -108,15 +108,24 @@ struct srcu_struct {
 #define SRCU_STATE_SCAN1	1
 #define SRCU_STATE_SCAN2	2
 
-#define __SRCU_STRUCT_INIT(name, pcpu_name)				\
-{									\
-	.sda = &pcpu_name,						\
+#define __SRCU_STRUCT_INIT_COMMON(name)					\
 	.lock = __SPIN_LOCK_UNLOCKED(name.lock),			\
 	.srcu_gp_seq_needed = -1UL,					\
 	.work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0),	\
-	__SRCU_DEP_MAP_INIT(name)					\
+	__SRCU_DEP_MAP_INIT(name)
+
+#define __SRCU_STRUCT_INIT_MODULE(name)					\
+{									\
+	__SRCU_STRUCT_INIT_COMMON(name)					\
 }
+
+#define __SRCU_STRUCT_INIT(name, pcpu_name)				\
+{									\
+	.sda = &pcpu_name,						\
+	__SRCU_STRUCT_INIT_COMMON(name)					\
+}
+
 
 /*
  * Define and initialize a srcu struct at build time.
  * Do -not- call init_srcu_struct() nor cleanup_srcu_struct() on it.
@@ -138,7 +147,7 @@ struct srcu_struct {
  */
 #ifdef MODULE
 # define __DEFINE_SRCU(name, is_static)					\
-	is_static struct srcu_struct name;				\
+	is_static struct srcu_struct name = __SRCU_STRUCT_INIT_MODULE(name); \
 	extern struct srcu_struct * const __srcu_struct_##name;	\
 	struct srcu_struct * const __srcu_struct_##name			\
 		__section("___srcu_struct_ptrs") = &name
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index ab4ee58af84b..7e6e7dfb1a87 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1873,13 +1873,14 @@ void __init srcu_init(void)
 static int srcu_module_coming(struct module *mod)
 {
 	int i;
+	struct srcu_struct *ssp;
 	struct srcu_struct **sspp = mod->srcu_struct_ptrs;
-	int ret;
 
 	for (i = 0; i < mod->num_srcu_structs; i++) {
-		ret = init_srcu_struct(*(sspp++));
-		if (WARN_ON_ONCE(ret))
-			return ret;
+		ssp = *(sspp++);
+		ssp->sda = alloc_percpu(struct srcu_data);
+		if (WARN_ON_ONCE(!ssp->sda))
+			return -ENOMEM;
 	}
 	return 0;
 }
@@ -1888,10 +1889,16 @@ static int srcu_module_coming(struct module *mod)
 static void srcu_module_going(struct module *mod)
 {
 	int i;
+	struct srcu_struct *ssp;
 	struct srcu_struct **sspp = mod->srcu_struct_ptrs;
 
-	for (i = 0; i < mod->num_srcu_structs; i++)
-		cleanup_srcu_struct(*(sspp++));
+	for (i = 0; i < mod->num_srcu_structs; i++) {
+		ssp = *(sspp++);
+		if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_gp_seq_needed)) &&
+		    !WARN_ON_ONCE(!ssp->sda_is_static))
+			cleanup_srcu_struct(ssp);
+		free_percpu(ssp->sda);
+	}
 }
 
 /* Handle one module, either coming or going. */
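The net effect of the change above is that an in-module srcu_struct is now
fully initialized at compile time except for its ->sda per-CPU array, which
srcu_module_coming() fills in with one allocation.  A condensed sketch of
that per-structure load/unload pairing, taken from the loops in the patch
with the iteration and the unload-time checks elided:

/* Module load: the only step that can fail. */
ssp->sda = alloc_percpu(struct srcu_data);
if (WARN_ON_ONCE(!ssp->sda))
	return -ENOMEM;

/* ... module lifetime ... */

/* Module unload: the per-CPU array is released unconditionally. */
free_percpu(ssp->sda);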
From patchwork Thu Mar 30 22:47:10 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77444
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", "Rafael J. Wysocki", Michał Mirosław,
    Dmitry Osipenko, Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 04/20] srcu: Begin offloading srcu_struct fields to srcu_update
Date: Thu, 30 Mar 2023 15:47:10 -0700
Message-Id: <20230330224726.662344-4-paulmck@kernel.org>
X-Mailer: git-send-email 2.40.0.rc2

The current srcu_struct structure is on the order of 200 bytes in size
(depending on architecture and .config), which is much better than the
old-style 26K bytes, but still all too inconvenient when one is trying
to achieve good cache locality on a fastpath involving SRCU readers.

However, only a few fields in srcu_struct are used by SRCU readers.
The remaining fields could be offloaded to a new srcu_update structure,
thus shrinking the srcu_struct structure down to a few tens of bytes.
This commit begins this noble quest, a quest that is complicated by
open-coded initialization of the srcu_struct within the
srcu_notifier_head structure.  This complication is addressed by
updating the srcu_notifier_head structure's open coding, given that
there does not appear to be a straightforward way of abstracting that
initialization.

This commit moves only the ->node pointer to srcu_update.  Later
commits will move additional fields.

[ paulmck: Fold in qiang1.zhang@intel.com's memory-leak fix. ]

Link: https://lore.kernel.org/all/20230320055751.4120251-1-qiang1.zhang@intel.com/
Suggested-by: Christoph Hellwig
Cc: "Rafael J. Wysocki"
Cc: Michał Mirosław
Cc: Dmitry Osipenko
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Acked-by: Rafael J. Wysocki
Signed-off-by: Paul E. McKenney
---
 include/linux/notifier.h |  5 ++++-
 include/linux/srcutiny.h |  6 +++---
 include/linux/srcutree.h | 27 ++++++++++++++++++---------
 kernel/rcu/rcu.h         |  6 ++++--
 kernel/rcu/srcutree.c    | 28 +++++++++++++++++++---------
 5 files changed, 48 insertions(+), 24 deletions(-)

diff --git a/include/linux/notifier.h b/include/linux/notifier.h
index aef88c2d1173..2aba75145144 100644
--- a/include/linux/notifier.h
+++ b/include/linux/notifier.h
@@ -73,6 +73,9 @@ struct raw_notifier_head {
 
 struct srcu_notifier_head {
 	struct mutex mutex;
+#ifdef CONFIG_TREE_SRCU
+	struct srcu_usage srcuu;
+#endif
 	struct srcu_struct srcu;
 	struct notifier_block __rcu *head;
 };
@@ -107,7 +110,7 @@ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
 	{								\
 		.mutex = __MUTEX_INITIALIZER(name.mutex),		\
 		.head = NULL,						\
-		.srcu = __SRCU_STRUCT_INIT(name.srcu, pcpu),		\
+		.srcu = __SRCU_STRUCT_INIT(name.srcu, name.srcuu, pcpu), \
 	}
 
 #define ATOMIC_NOTIFIER_HEAD(name)					\
diff --git a/include/linux/srcutiny.h b/include/linux/srcutiny.h
index 5aa5e0faf6a1..ebd72491af99 100644
--- a/include/linux/srcutiny.h
+++ b/include/linux/srcutiny.h
@@ -31,7 +31,7 @@ struct srcu_struct {
 
 void srcu_drive_gp(struct work_struct *wp);
 
-#define __SRCU_STRUCT_INIT(name, __ignored)				\
+#define __SRCU_STRUCT_INIT(name, __ignored, ___ignored)			\
 {									\
 	.srcu_wq = __SWAIT_QUEUE_HEAD_INITIALIZER(name.srcu_wq),	\
 	.srcu_cb_tail = &name.srcu_cb_head,				\
@@ -44,9 +44,9 @@ void srcu_drive_gp(struct work_struct *wp);
  * Tree SRCU, which needs some per-CPU data.
  */
 #define DEFINE_SRCU(name) \
-	struct srcu_struct name = __SRCU_STRUCT_INIT(name, name)
+	struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name)
 #define DEFINE_STATIC_SRCU(name) \
-	static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name)
+	static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name, name)
 
 void synchronize_srcu(struct srcu_struct *ssp);
 
diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 3ce6deee1dbe..276f325f1296 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -57,11 +57,17 @@ struct srcu_node {
 	int grphi;				/* Biggest CPU for node. */
 };
 
+/*
+ * Per-SRCU-domain structure, update-side data linked from srcu_struct.
+ */
+struct srcu_usage {
+	struct srcu_node *node;			/* Combining tree. */
+};
+
 /*
  * Per-SRCU-domain structure, similar in function to rcu_state.
  */
 struct srcu_struct {
-	struct srcu_node *node;			/* Combining tree. */
 	struct srcu_node *level[RCU_NUM_LVLS + 1];
 						/* First node at each level. */
 	int srcu_size_state;			/* Small-to-big transition state. */
@@ -90,6 +96,7 @@ struct srcu_struct {
 	unsigned long reschedule_count;
 	struct delayed_work work;
 	struct lockdep_map dep_map;
+	struct srcu_usage *srcu_sup;		/* Update-side data. */
 };
 
 /* Values for size state variable (->srcu_size_state). */
@@ -108,24 +115,24 @@ struct srcu_struct {
 #define SRCU_STATE_SCAN1	1
 #define SRCU_STATE_SCAN2	2
 
-#define __SRCU_STRUCT_INIT_COMMON(name)					\
+#define __SRCU_STRUCT_INIT_COMMON(name, usage_name)			\
 	.lock = __SPIN_LOCK_UNLOCKED(name.lock),			\
 	.srcu_gp_seq_needed = -1UL,					\
 	.work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0),	\
+	.srcu_sup = &usage_name,					\
 	__SRCU_DEP_MAP_INIT(name)
 
-#define __SRCU_STRUCT_INIT_MODULE(name)					\
+#define __SRCU_STRUCT_INIT_MODULE(name, usage_name)			\
 {									\
-	__SRCU_STRUCT_INIT_COMMON(name)					\
+	__SRCU_STRUCT_INIT_COMMON(name, usage_name)			\
 }
 
-#define __SRCU_STRUCT_INIT(name, pcpu_name)				\
+#define __SRCU_STRUCT_INIT(name, usage_name, pcpu_name)			\
 {									\
 	.sda = &pcpu_name,						\
-	__SRCU_STRUCT_INIT_COMMON(name)					\
+	__SRCU_STRUCT_INIT_COMMON(name, usage_name)			\
 }
 
-
 /*
  * Define and initialize a srcu struct at build time.
  * Do -not- call init_srcu_struct() nor cleanup_srcu_struct() on it.
@@ -147,15 +154,17 @@ struct srcu_struct {
  */
 #ifdef MODULE
 # define __DEFINE_SRCU(name, is_static)					\
-	is_static struct srcu_struct name = __SRCU_STRUCT_INIT_MODULE(name); \
+	static struct srcu_usage name##_srcu_usage;			\
+	is_static struct srcu_struct name = __SRCU_STRUCT_INIT_MODULE(name, name##_srcu_usage); \
 	extern struct srcu_struct * const __srcu_struct_##name;	\
 	struct srcu_struct * const __srcu_struct_##name			\
 		__section("___srcu_struct_ptrs") = &name
 #else
 # define __DEFINE_SRCU(name, is_static)					\
 	static DEFINE_PER_CPU(struct srcu_data, name##_srcu_data);	\
+	static struct srcu_usage name##_srcu_usage;			\
 	is_static struct srcu_struct name =				\
-		__SRCU_STRUCT_INIT(name, name##_srcu_data)
+		__SRCU_STRUCT_INIT(name, name##_srcu_usage, name##_srcu_data)
 #endif
 #define DEFINE_SRCU(name)		__DEFINE_SRCU(name, /* not static */)
 #define DEFINE_STATIC_SRCU(name)	__DEFINE_SRCU(name, static)
diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index 115616ac3bfa..8d18d4bf0e29 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -341,11 +341,13 @@ extern void rcu_init_geometry(void);
  * specified state structure (for SRCU) or the only rcu_state structure
  * (for RCU).
  */
-#define srcu_for_each_node_breadth_first(sp, rnp) \
+#define _rcu_for_each_node_breadth_first(sp, rnp) \
 	for ((rnp) = &(sp)->node[0]; \
 	     (rnp) < &(sp)->node[rcu_num_nodes]; (rnp)++)
 #define rcu_for_each_node_breadth_first(rnp) \
-	srcu_for_each_node_breadth_first(&rcu_state, rnp)
+	_rcu_for_each_node_breadth_first(&rcu_state, rnp)
+#define srcu_for_each_node_breadth_first(ssp, rnp) \
+	_rcu_for_each_node_breadth_first(ssp->srcu_sup, rnp)
 
 /*
  * Scan the leaves of the rcu_node hierarchy for the rcu_state structure.
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 7e6e7dfb1a87..049e20dbec76 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -173,12 +173,12 @@ static bool init_srcu_struct_nodes(struct srcu_struct *ssp, gfp_t gfp_flags)
 
 	/* Initialize geometry if it has not already been initialized. */
 	rcu_init_geometry();
-	ssp->node = kcalloc(rcu_num_nodes, sizeof(*ssp->node), gfp_flags);
-	if (!ssp->node)
+	ssp->srcu_sup->node = kcalloc(rcu_num_nodes, sizeof(*ssp->srcu_sup->node), gfp_flags);
+	if (!ssp->srcu_sup->node)
 		return false;
 
 	/* Work out the overall tree geometry. */
-	ssp->level[0] = &ssp->node[0];
+	ssp->level[0] = &ssp->srcu_sup->node[0];
 	for (i = 1; i < rcu_num_lvls; i++)
 		ssp->level[i] = ssp->level[i - 1] + num_rcu_lvl[i - 1];
 	rcu_init_levelspread(levelspread, num_rcu_lvl);
@@ -195,7 +195,7 @@ static bool init_srcu_struct_nodes(struct srcu_struct *ssp, gfp_t gfp_flags)
 		snp->srcu_gp_seq_needed_exp = SRCU_SNP_INIT_SEQ;
 		snp->grplo = -1;
 		snp->grphi = -1;
-		if (snp == &ssp->node[0]) {
+		if (snp == &ssp->srcu_sup->node[0]) {
 			/* Root node, special case. */
 			snp->srcu_parent = NULL;
 			continue;
 		}
@@ -236,8 +236,12 @@ static bool init_srcu_struct_nodes(struct srcu_struct *ssp, gfp_t gfp_flags)
  */
 static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
 {
+	if (!is_static)
+		ssp->srcu_sup = kzalloc(sizeof(*ssp->srcu_sup), GFP_KERNEL);
+	if (!ssp->srcu_sup)
+		return -ENOMEM;
 	ssp->srcu_size_state = SRCU_SIZE_SMALL;
-	ssp->node = NULL;
+	ssp->srcu_sup->node = NULL;
 	mutex_init(&ssp->srcu_cb_mutex);
 	mutex_init(&ssp->srcu_gp_mutex);
 	ssp->srcu_idx = 0;
@@ -249,8 +253,11 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
 	ssp->sda_is_static = is_static;
 	if (!is_static)
 		ssp->sda = alloc_percpu(struct srcu_data);
-	if (!ssp->sda)
+	if (!ssp->sda) {
+		if (!is_static)
+			kfree(ssp->srcu_sup);
 		return -ENOMEM;
+	}
 	init_srcu_struct_data(ssp);
 	ssp->srcu_gp_seq_needed_exp = 0;
 	ssp->srcu_last_gp_end = ktime_get_mono_fast_ns();
@@ -259,6 +266,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
 		if (!ssp->sda_is_static) {
 			free_percpu(ssp->sda);
 			ssp->sda = NULL;
+			kfree(ssp->srcu_sup);
 			return -ENOMEM;
 		}
 	} else {
@@ -656,13 +664,15 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
 			rcu_seq_current(&ssp->srcu_gp_seq), ssp->srcu_gp_seq_needed);
 		return; /* Caller forgot to stop doing call_srcu()? */
 	}
+	kfree(ssp->srcu_sup->node);
+	ssp->srcu_sup->node = NULL;
+	ssp->srcu_size_state = SRCU_SIZE_SMALL;
 	if (!ssp->sda_is_static) {
 		free_percpu(ssp->sda);
 		ssp->sda = NULL;
+		kfree(ssp->srcu_sup);
+		ssp->srcu_sup = NULL;
 	}
-	kfree(ssp->node);
-	ssp->node = NULL;
-	ssp->srcu_size_state = SRCU_SIZE_SMALL;
 }
 EXPORT_SYMBOL_GPL(cleanup_srcu_struct);
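The overall shape of the split that this patch starts can be seen in a
freestanding sketch: the structure that readers touch stays small, and
everything update-side is reached through one pointer.  The struct and field
names below (foo, foo_usage, and so on) are illustrative stand-ins, not the
real SRCU definitions:

struct foo_node;			/* Combining-tree node, details elided. */

struct foo_usage {			/* Cold, update-side state. */
	struct foo_node *node;		/* Combining tree, as in srcu_usage. */
};

struct foo {				/* Hot structure on the reader fastpath. */
	unsigned int idx;		/* Read-side index and similar hot fields. */
	struct foo_usage *usage;	/* Single pointer to the update-side data. */
};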
From patchwork Thu Mar 30 22:47:11 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77446
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 05/20] srcu: Move ->level from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:11 -0700
Message-Id: <20230330224726.662344-5-paulmck@kernel.org>
X-Mailer: git-send-email 2.40.0.rc2

This commit moves the ->level[] array from the srcu_struct structure
to the srcu_usage structure to reduce the size of the former in order
to improve cache locality.

Suggested-by: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E. McKenney
---
 include/linux/srcutree.h |  4 ++--
 kernel/rcu/srcutree.c    | 14 +++++++-------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index 276f325f1296..c7373fe5c14b 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -62,14 +62,14 @@ struct srcu_node {
  */
 struct srcu_usage {
 	struct srcu_node *node;			/* Combining tree. */
+	struct srcu_node *level[RCU_NUM_LVLS + 1];
+						/* First node at each level. */
 };
 
 /*
  * Per-SRCU-domain structure, similar in function to rcu_state.
  */
 struct srcu_struct {
-	struct srcu_node *level[RCU_NUM_LVLS + 1];
-						/* First node at each level. */
 	int srcu_size_state;			/* Small-to-big transition state. */
 	struct mutex srcu_cb_mutex;		/* Serialize CB preparation. */
 	spinlock_t __private lock;		/* Protect counters and size state. */
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 049e20dbec76..acb0862faafa 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -178,9 +178,9 @@ static bool init_srcu_struct_nodes(struct srcu_struct *ssp, gfp_t gfp_flags)
 		return false;
 
 	/* Work out the overall tree geometry. */
-	ssp->level[0] = &ssp->srcu_sup->node[0];
+	ssp->srcu_sup->level[0] = &ssp->srcu_sup->node[0];
 	for (i = 1; i < rcu_num_lvls; i++)
-		ssp->level[i] = ssp->level[i - 1] + num_rcu_lvl[i - 1];
+		ssp->srcu_sup->level[i] = ssp->srcu_sup->level[i - 1] + num_rcu_lvl[i - 1];
 	rcu_init_levelspread(levelspread, num_rcu_lvl);
 
 	/* Each pass through this loop initializes one srcu_node structure. */
@@ -202,10 +202,10 @@ static bool init_srcu_struct_nodes(struct srcu_struct *ssp, gfp_t gfp_flags)
 		}
 
 		/* Non-root node. */
-		if (snp == ssp->level[level + 1])
+		if (snp == ssp->srcu_sup->level[level + 1])
 			level++;
-		snp->srcu_parent = ssp->level[level - 1] +
-				   (snp - ssp->level[level]) /
+		snp->srcu_parent = ssp->srcu_sup->level[level - 1] +
+				   (snp - ssp->srcu_sup->level[level]) /
 				   levelspread[level - 1];
 	}
 
@@ -214,7 +214,7 @@ static bool init_srcu_struct_nodes(struct srcu_struct *ssp, gfp_t gfp_flags)
 	 * leaves of the srcu_node tree.
 	 */
 	level = rcu_num_lvls - 1;
-	snp_first = ssp->level[level];
+	snp_first = ssp->srcu_sup->level[level];
 	for_each_possible_cpu(cpu) {
 		sdp = per_cpu_ptr(ssp->sda, cpu);
 		sdp->mynode = &snp_first[cpu / levelspread[level]];
@@ -889,7 +889,7 @@ static void srcu_gp_end(struct srcu_struct *ssp)
 	srcu_for_each_node_breadth_first(ssp, snp) {
 		spin_lock_irq_rcu_node(snp);
 		cbs = false;
-		last_lvl = snp >= ssp->level[rcu_num_lvls - 1];
+		last_lvl = snp >= ssp->srcu_sup->level[rcu_num_lvls - 1];
 		if (last_lvl)
 			cbs = ss_state < SRCU_SIZE_BIG || snp->srcu_have_cbs[idx] == gpseq;
 		snp->srcu_have_cbs[idx] = gpseq;
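The geometry code touched above only records where each tree level begins
inside the flat ->node[] array.  A toy user-space illustration of that offset
computation, with made-up level sizes (node_stub, num_lvl, and the
three-level shape are illustrative, not the real RCU geometry):

struct node_stub { int grplo, grphi; };

static struct node_stub node[1 + 2 + 4];	/* Three levels: 1, 2, and 4 nodes. */
static struct node_stub *level[3];
static const int num_lvl[3] = { 1, 2, 4 };

static void init_levels(void)
{
	int i;

	level[0] = &node[0];			/* Root level starts the array. */
	for (i = 1; i < 3; i++)			/* Each level follows the previous one. */
		level[i] = level[i - 1] + num_lvl[i - 1];
}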
From patchwork Thu Mar 30 22:47:12 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77447
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 06/20] srcu: Move ->srcu_size_state from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:12 -0700
Message-Id: <20230330224726.662344-6-paulmck@kernel.org>
X-Mailer: git-send-email 2.40.0.rc2

This commit moves the ->srcu_size_state field from the srcu_struct
structure to the srcu_usage structure to reduce the size of the former
in order to improve cache locality.

Suggested-by: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E. McKenney
---
 include/linux/srcutree.h |  2 +-
 kernel/rcu/srcutree.c    | 37 +++++++++++++++++++------------------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
index c7373fe5c14b..443d27a214ef 100644
--- a/include/linux/srcutree.h
+++ b/include/linux/srcutree.h
@@ -64,13 +64,13 @@ struct srcu_usage {
 	struct srcu_node *node;			/* Combining tree. */
 	struct srcu_node *level[RCU_NUM_LVLS + 1];
 						/* First node at each level. */
+	int srcu_size_state;			/* Small-to-big transition state. */
 };
 
 /*
  * Per-SRCU-domain structure, similar in function to rcu_state.
  */
 struct srcu_struct {
-	int srcu_size_state;			/* Small-to-big transition state. */
 	struct mutex srcu_cb_mutex;		/* Serialize CB preparation. */
 	spinlock_t __private lock;		/* Protect counters and size state. */
 	struct mutex srcu_gp_mutex;		/* Serialize GP work. */
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index acb0862faafa..8428a184d506 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -225,7 +225,7 @@ static bool init_srcu_struct_nodes(struct srcu_struct *ssp, gfp_t gfp_flags)
 		}
 		sdp->grpmask = 1 << (cpu - sdp->mynode->grplo);
 	}
-	smp_store_release(&ssp->srcu_size_state, SRCU_SIZE_WAIT_BARRIER);
+	smp_store_release(&ssp->srcu_sup->srcu_size_state, SRCU_SIZE_WAIT_BARRIER);
 	return true;
 }
 
@@ -240,7 +240,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
 		ssp->srcu_sup = kzalloc(sizeof(*ssp->srcu_sup), GFP_KERNEL);
 	if (!ssp->srcu_sup)
 		return -ENOMEM;
-	ssp->srcu_size_state = SRCU_SIZE_SMALL;
+	ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL;
 	ssp->srcu_sup->node = NULL;
 	mutex_init(&ssp->srcu_cb_mutex);
 	mutex_init(&ssp->srcu_gp_mutex);
@@ -261,7 +261,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
 	init_srcu_struct_data(ssp);
 	ssp->srcu_gp_seq_needed_exp = 0;
 	ssp->srcu_last_gp_end = ktime_get_mono_fast_ns();
-	if (READ_ONCE(ssp->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) {
+	if (READ_ONCE(ssp->srcu_sup->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) {
 		if (!init_srcu_struct_nodes(ssp, GFP_ATOMIC)) {
 			if (!ssp->sda_is_static) {
 				free_percpu(ssp->sda);
@@ -270,7 +270,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static)
 				return -ENOMEM;
 			}
 		} else {
-			WRITE_ONCE(ssp->srcu_size_state, SRCU_SIZE_BIG);
+			WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG);
 		}
 	}
 	smp_store_release(&ssp->srcu_gp_seq_needed, 0); /* Init done. */
@@ -315,7 +315,7 @@ EXPORT_SYMBOL_GPL(init_srcu_struct);
 static void __srcu_transition_to_big(struct srcu_struct *ssp)
 {
 	lockdep_assert_held(&ACCESS_PRIVATE(ssp, lock));
-	smp_store_release(&ssp->srcu_size_state, SRCU_SIZE_ALLOC);
+	smp_store_release(&ssp->srcu_sup->srcu_size_state, SRCU_SIZE_ALLOC);
 }
 
 /*
@@ -326,10 +326,10 @@ static void srcu_transition_to_big(struct srcu_struct *ssp)
 	unsigned long flags;
 
 	/* Double-checked locking on ->srcu_size-state. */
-	if (smp_load_acquire(&ssp->srcu_size_state) != SRCU_SIZE_SMALL)
+	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL)
 		return;
 	spin_lock_irqsave_rcu_node(ssp, flags);
-	if (smp_load_acquire(&ssp->srcu_size_state) != SRCU_SIZE_SMALL) {
+	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL) {
 		spin_unlock_irqrestore_rcu_node(ssp, flags);
 		return;
 	}
@@ -345,7 +345,7 @@ static void spin_lock_irqsave_check_contention(struct srcu_struct *ssp)
 {
 	unsigned long j;
 
-	if (!SRCU_SIZING_IS_CONTEND() || ssp->srcu_size_state)
+	if (!SRCU_SIZING_IS_CONTEND() || ssp->srcu_sup->srcu_size_state)
 		return;
 	j = jiffies;
 	if (ssp->srcu_size_jiffies != j) {
@@ -666,7 +666,7 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
 	}
 	kfree(ssp->srcu_sup->node);
 	ssp->srcu_sup->node = NULL;
-	ssp->srcu_size_state = SRCU_SIZE_SMALL;
+	ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL;
 	if (!ssp->sda_is_static) {
 		free_percpu(ssp->sda);
 		ssp->sda = NULL;
@@ -770,7 +770,7 @@ static void srcu_gp_start(struct srcu_struct *ssp)
 	struct srcu_data *sdp;
 	int state;
 
-	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
+	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id());
 	else
 		sdp = this_cpu_ptr(ssp->sda);
@@ -880,7 +880,7 @@ static void srcu_gp_end(struct srcu_struct *ssp)
 	/* A new grace period can start at this point.  But only one. */
 
 	/* Initiate callback invocation as needed. */
-	ss_state = smp_load_acquire(&ssp->srcu_size_state);
+	ss_state = smp_load_acquire(&ssp->srcu_sup->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_BARRIER) {
 		srcu_schedule_cbs_sdp(per_cpu_ptr(ssp->sda, get_boot_cpu_id()), cbdelay);
@@ -940,7 +940,7 @@ static void srcu_gp_end(struct srcu_struct *ssp)
 		if (ss_state == SRCU_SIZE_ALLOC)
 			init_srcu_struct_nodes(ssp, GFP_KERNEL);
 		else
-			smp_store_release(&ssp->srcu_size_state, ss_state + 1);
+			smp_store_release(&ssp->srcu_sup->srcu_size_state, ss_state + 1);
 	}
 }
 
@@ -1002,7 +1002,7 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp,
 	unsigned long snp_seq;
 
 	/* Ensure that snp node tree is fully initialized before traversing it */
-	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
+	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		snp_leaf = NULL;
 	else
 		snp_leaf = sdp->mynode;
@@ -1209,7 +1209,7 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp,
 	 * sequence number cannot wrap around in the meantime.
 	 */
 	idx = __srcu_read_lock_nmisafe(ssp);
-	ss_state = smp_load_acquire(&ssp->srcu_size_state);
+	ss_state = smp_load_acquire(&ssp->srcu_sup->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_CALL)
 		sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id());
 	else
@@ -1546,7 +1546,7 @@ void srcu_barrier(struct srcu_struct *ssp)
 	atomic_set(&ssp->srcu_barrier_cpu_cnt, 1);
 
 	idx = __srcu_read_lock_nmisafe(ssp);
-	if (smp_load_acquire(&ssp->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
+	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		srcu_barrier_one_cpu(ssp, per_cpu_ptr(ssp->sda, get_boot_cpu_id()));
 	else
 		for_each_possible_cpu(cpu)
@@ -1784,7 +1784,7 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
 	int cpu;
 	int idx;
 	unsigned long s0 = 0, s1 = 0;
-	int ss_state = READ_ONCE(ssp->srcu_size_state);
+	int ss_state = READ_ONCE(ssp->srcu_sup->srcu_size_state);
 	int ss_state_idx = ss_state;
 
 	idx = ssp->srcu_idx & 0x1;
@@ -1871,8 +1871,9 @@ void __init srcu_init(void)
 		ssp = list_first_entry(&srcu_boot_list, struct srcu_struct, work.work.entry);
 		list_del_init(&ssp->work.work.entry);
-		if (SRCU_SIZING_IS(SRCU_SIZING_INIT) && ssp->srcu_size_state == SRCU_SIZE_SMALL)
-			ssp->srcu_size_state = SRCU_SIZE_ALLOC;
+		if (SRCU_SIZING_IS(SRCU_SIZING_INIT) &&
+		    ssp->srcu_sup->srcu_size_state == SRCU_SIZE_SMALL)
+			ssp->srcu_sup->srcu_size_state = SRCU_SIZE_ALLOC;
 		queue_work(rcu_gp_wq, &ssp->work.work);
 	}
 }
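The size-state accesses rewritten above keep the existing double-checked
pattern: an smp_load_acquire() fast path, a recheck under the lock, and
smp_store_release() publishing each transition.  Condensing the
srcu_transition_to_big() body shown in this patch down to its control flow
(declarations and the definitions of the locking helpers omitted):

/* Fast path: already past SRCU_SIZE_SMALL, nothing to do. */
if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL)
	return;
spin_lock_irqsave_rcu_node(ssp, flags);
/* Recheck under the lock in case another CPU started the transition. */
if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL) {
	spin_unlock_irqrestore_rcu_node(ssp, flags);
	return;
}
__srcu_transition_to_big(ssp);	/* smp_store_release()s SRCU_SIZE_ALLOC. */
spin_unlock_irqrestore_rcu_node(ssp, flags);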
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 07/20] srcu: Move ->srcu_cb_mutex from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:13 -0700
Message-Id: <20230330224726.662344-7-paulmck@kernel.org>

This commit moves the ->srcu_cb_mutex field from the srcu_struct structure to the srcu_usage structure to reduce the size of the former in order to improve cache locality.

Suggested-by: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E.
McKenney --- include/linux/srcutree.h | 2 +- kernel/rcu/srcutree.c | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index 443d27a214ef..231de66ceb15 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -65,13 +65,13 @@ struct srcu_usage { struct srcu_node *level[RCU_NUM_LVLS + 1]; /* First node at each level. */ int srcu_size_state; /* Small-to-big transition state. */ + struct mutex srcu_cb_mutex; /* Serialize CB preparation. */ }; /* * Per-SRCU-domain structure, similar in function to rcu_state. */ struct srcu_struct { - struct mutex srcu_cb_mutex; /* Serialize CB preparation. */ spinlock_t __private lock; /* Protect counters and size state. */ struct mutex srcu_gp_mutex; /* Serialize GP work. */ unsigned int srcu_idx; /* Current rdr array element. */ diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 8428a184d506..1814f3bfc219 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -242,7 +242,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) return -ENOMEM; ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL; ssp->srcu_sup->node = NULL; - mutex_init(&ssp->srcu_cb_mutex); + mutex_init(&ssp->srcu_sup->srcu_cb_mutex); mutex_init(&ssp->srcu_gp_mutex); ssp->srcu_idx = 0; ssp->srcu_gp_seq = 0; @@ -861,7 +861,7 @@ static void srcu_gp_end(struct srcu_struct *ssp) int ss_state; /* Prevent more than one additional grace period. */ - mutex_lock(&ssp->srcu_cb_mutex); + mutex_lock(&ssp->srcu_sup->srcu_cb_mutex); /* End the current grace period. */ spin_lock_irq_rcu_node(ssp); @@ -921,7 +921,7 @@ static void srcu_gp_end(struct srcu_struct *ssp) } /* Callback initiation done, allow grace periods after next. */ - mutex_unlock(&ssp->srcu_cb_mutex); + mutex_unlock(&ssp->srcu_sup->srcu_cb_mutex); /* Start a new grace period if needed. */ spin_lock_irq_rcu_node(ssp); From patchwork Thu Mar 30 22:47:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 77454 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp189370vqo; Thu, 30 Mar 2023 16:06:32 -0700 (PDT) X-Google-Smtp-Source: AKy350YNKwtPbPpKPF1YYHMOPPLwmirHY/joPc8CMke6+Unu0c7oFk+/Dy3hC7dhHMw+kDtpM01O X-Received: by 2002:a17:906:395:b0:93b:6da8:539a with SMTP id b21-20020a170906039500b0093b6da8539amr24690948eja.18.1680217591923; Thu, 30 Mar 2023 16:06:31 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680217591; cv=none; d=google.com; s=arc-20160816; b=Y94V9HDdhIzX3M4LTD996Kr7fTwvx9y+3e9wlhSBQcOgvFcKuVsmLuxf6N9KAexGgB 5ZhKKvvmwKxDS2ulVZ2RFbpFR78VEwnlESKb4ZyJNrsTxOs6gp8mpUVrBOIsKLaiAEi5 Ha4DM37Uu2bOLnyFw6l/3DttkmNxZukdLpu9xCw56t8VGwEi2vqVVpEIh7Me9ebfSAPc AzYdNZpTBDqCE11I26vEGXjsCwFt3xScRSR+cna4FwqK6I137X5OWjCVLuqq+RjCJAeh kvKqt56z7ckSQ1ck7KwrUrVKYiJ2x+6Jwn2MMA6QMGcONjD+HAyrISsUoNaR4dAGT52s rBBA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=NrxA+b1gWRIXqGD10ruq9ehp30FwlBnqVL27uybxwhI=; b=PT9x2JLvZJC+R350BXtTH+1TuTOjNESfywHzfMoOs+yDZjw02wad3xn65WCHSByExT S8CVgg6EsZUQ4Q/rPnsXxu9p+Lz3RIp/7a8PXYcF+UfrQkjRReU44j3eIbEgyfOQ4xjK KCipTvEqWgtAGSPoPg2FA7v0bkUO/6XQahOLLadM2tcTqhbCmMcHZuFlgKyXOo4X1xgx GjdalgE9GLKcuDMVp2IVUBGYyYHHpiFG6GKyvKx5YIQgMgHLzD1X0YRcbs9K/Yf3aqIH gk3ceScV/1Wd0o7FF96EDeD2QmD1c3VOZ+XqYAw1y2O9xC+w5MC+++UqdqvHpjOIB58f 5HJA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=l4LApNwb; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (out1.vger.email. 
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 08/20] srcu: Move ->lock initialization after srcu_usage allocation
Date: Thu, 30 Mar 2023 15:47:14 -0700
Message-Id: <20230330224726.662344-8-paulmck@kernel.org>

Currently, both __init_srcu_struct() in CONFIG_DEBUG_LOCK_ALLOC=y kernels and init_srcu_struct() in CONFIG_DEBUG_LOCK_ALLOC=n kernels initialize the srcu_struct structure's ->lock before the srcu_usage structure has been allocated.
This of course prevents the ->lock from being moved to the srcu_usage structure, so this commit moves the initialization into the init_srcu_struct_fields() after the srcu_usage structure has been allocated. Cc: Christoph Hellwig Tested-by: Sachin Sant Tested-by: "Zhang, Qiang1" Signed-off-by: Paul E. McKenney --- kernel/rcu/srcutree.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 1814f3bfc219..c2a024a60f1a 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -240,6 +240,8 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) ssp->srcu_sup = kzalloc(sizeof(*ssp->srcu_sup), GFP_KERNEL); if (!ssp->srcu_sup) return -ENOMEM; + if (!is_static) + spin_lock_init(&ACCESS_PRIVATE(ssp, lock)); ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL; ssp->srcu_sup->node = NULL; mutex_init(&ssp->srcu_sup->srcu_cb_mutex); @@ -285,7 +287,6 @@ int __init_srcu_struct(struct srcu_struct *ssp, const char *name, /* Don't re-initialize a lock while it is held. */ debug_check_no_locks_freed((void *)ssp, sizeof(*ssp)); lockdep_init_map(&ssp->dep_map, name, key, 0); - spin_lock_init(&ACCESS_PRIVATE(ssp, lock)); return init_srcu_struct_fields(ssp, false); } EXPORT_SYMBOL_GPL(__init_srcu_struct); @@ -302,7 +303,6 @@ EXPORT_SYMBOL_GPL(__init_srcu_struct); */ int init_srcu_struct(struct srcu_struct *ssp) { - spin_lock_init(&ACCESS_PRIVATE(ssp, lock)); return init_srcu_struct_fields(ssp, false); } EXPORT_SYMBOL_GPL(init_srcu_struct); From patchwork Thu Mar 30 22:47:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. McKenney" X-Patchwork-Id: 77438 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp183627vqo; Thu, 30 Mar 2023 15:57:06 -0700 (PDT) X-Google-Smtp-Source: AKy350aaWZSMZnB+UgM+su5xP11fu6gzCndGj3XxGChT7gE6i7tjaP6DPLJogXdNLTkXpjUCTbFK X-Received: by 2002:a17:906:5fd9:b0:930:d17b:959b with SMTP id k25-20020a1709065fd900b00930d17b959bmr26972660ejv.22.1680217026038; Thu, 30 Mar 2023 15:57:06 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680217026; cv=none; d=google.com; s=arc-20160816; b=rG55QNuV7mRBIwR8iIANPGMHotuvbzfAUvaV0AnDVwx9I7GME2ho/tkelDBZWHJYLh D/5tix5kX5bXk9s+EfV3bDdYelIGeML5isnF+x4UuUjH7MSAVZFcW/724Khf/UbpLhss 2sL1dxW6SKnYivxV99ALBTXGEFKKzrUo2Eia5/dHS6xy0pfSjrpk0zCggDXtBQQd67EL Yi6HehOiL0hHs0dkni0iOOloZv8EccYWLSQ876uAgb9n7nK5bV7P0DYWPYhz+vTWu7ZZ wp/8whi+ocqeZDKe5fQ5hkOOHOcJd+/PYIl/XssRUI3bhle64BVQQWegbzfu17VmCmxd EhqA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=y58LNW7RSv3Nfws/IK5Ldixm6F9pwBKdgh0w90sN+Ss=; b=UqUor5fZttxcQJP/BHvHqLPfyaboGJe0V2bL8odFrxTKFrHv+ZXVH+FgQ0Hhd94h0X nY5YvJu85HBDO/SFTwyx9E5l5tscftpIqZh/SEqrLP7+eoQGn7Q2KC287uc4Sp8mrZsp guylXDewog5VbzBaizOMMGjz++thT60G0HYqkDnxeyPjS8qX3DjNpNxbdQcg6NJ2aNeJ 9e6PGG+kTHSXye1hkb5R1Q2zBL5Mvw2bGGdMi3o7wV+l7Ey0Z6nai8Y/2egB0JMlpHsJ BRZ33il5T2S5XHzi4bjmkxpfAkq0Bp7BW08X5j7NEUk4ZCejPyfV2A9b/mOt91vZLtdv gDCw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=O6TuH8C9; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) 
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 09/20] srcu: Move ->lock from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:15 -0700
Message-Id: <20230330224726.662344-9-paulmck@kernel.org>

This commit moves the ->lock field from the srcu_struct structure to the srcu_usage structure to reduce the size of the former in order to improve cache locality.

Suggested-by: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E.
McKenney --- include/linux/srcutree.h | 11 +++++--- kernel/rcu/srcutree.c | 56 ++++++++++++++++++++-------------------- 2 files changed, 35 insertions(+), 32 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index 231de66ceb15..694d87b81917 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -66,13 +66,13 @@ struct srcu_usage { /* First node at each level. */ int srcu_size_state; /* Small-to-big transition state. */ struct mutex srcu_cb_mutex; /* Serialize CB preparation. */ + spinlock_t __private lock; /* Protect counters and size state. */ }; /* * Per-SRCU-domain structure, similar in function to rcu_state. */ struct srcu_struct { - spinlock_t __private lock; /* Protect counters and size state. */ struct mutex srcu_gp_mutex; /* Serialize GP work. */ unsigned int srcu_idx; /* Current rdr array element. */ unsigned long srcu_gp_seq; /* Grace-period seq #. */ @@ -116,7 +116,6 @@ struct srcu_struct { #define SRCU_STATE_SCAN2 2 #define __SRCU_STRUCT_INIT_COMMON(name, usage_name) \ - .lock = __SPIN_LOCK_UNLOCKED(name.lock), \ .srcu_gp_seq_needed = -1UL, \ .work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0), \ .srcu_sup = &usage_name, \ @@ -154,7 +153,9 @@ struct srcu_struct { */ #ifdef MODULE # define __DEFINE_SRCU(name, is_static) \ - static struct srcu_usage name##_srcu_usage; \ + static struct srcu_usage name##_srcu_usage = { \ + .lock = __SPIN_LOCK_UNLOCKED(name.lock), \ + }; \ is_static struct srcu_struct name = __SRCU_STRUCT_INIT_MODULE(name, name##_srcu_usage); \ extern struct srcu_struct * const __srcu_struct_##name; \ struct srcu_struct * const __srcu_struct_##name \ @@ -162,7 +163,9 @@ struct srcu_struct { #else # define __DEFINE_SRCU(name, is_static) \ static DEFINE_PER_CPU(struct srcu_data, name##_srcu_data); \ - static struct srcu_usage name##_srcu_usage; \ + static struct srcu_usage name##_srcu_usage = { \ + .lock = __SPIN_LOCK_UNLOCKED(name.lock), \ + }; \ is_static struct srcu_struct name = \ __SRCU_STRUCT_INIT(name, name##_srcu_usage, name##_srcu_data) #endif diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index c2a024a60f1a..c42248cf18f6 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -103,7 +103,7 @@ do { \ #define spin_trylock_irqsave_rcu_node(p, flags) \ ({ \ - bool ___locked = spin_trylock_irqsave(&ACCESS_PRIVATE(p, lock), flags); \ + bool ___locked = spin_trylock_irqsave(&ACCESS_PRIVATE(p, lock), flags); \ \ if (___locked) \ smp_mb__after_unlock_lock(); \ @@ -241,7 +241,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) if (!ssp->srcu_sup) return -ENOMEM; if (!is_static) - spin_lock_init(&ACCESS_PRIVATE(ssp, lock)); + spin_lock_init(&ACCESS_PRIVATE(ssp->srcu_sup, lock)); ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL; ssp->srcu_sup->node = NULL; mutex_init(&ssp->srcu_sup->srcu_cb_mutex); @@ -314,7 +314,7 @@ EXPORT_SYMBOL_GPL(init_srcu_struct); */ static void __srcu_transition_to_big(struct srcu_struct *ssp) { - lockdep_assert_held(&ACCESS_PRIVATE(ssp, lock)); + lockdep_assert_held(&ACCESS_PRIVATE(ssp->srcu_sup, lock)); smp_store_release(&ssp->srcu_sup->srcu_size_state, SRCU_SIZE_ALLOC); } @@ -328,13 +328,13 @@ static void srcu_transition_to_big(struct srcu_struct *ssp) /* Double-checked locking on ->srcu_size-state. 
*/ if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL) return; - spin_lock_irqsave_rcu_node(ssp, flags); + spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags); if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) != SRCU_SIZE_SMALL) { - spin_unlock_irqrestore_rcu_node(ssp, flags); + spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); return; } __srcu_transition_to_big(ssp); - spin_unlock_irqrestore_rcu_node(ssp, flags); + spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); } /* @@ -369,9 +369,9 @@ static void spin_lock_irqsave_sdp_contention(struct srcu_data *sdp, unsigned lon if (spin_trylock_irqsave_rcu_node(sdp, *flags)) return; - spin_lock_irqsave_rcu_node(ssp, *flags); + spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags); spin_lock_irqsave_check_contention(ssp); - spin_unlock_irqrestore_rcu_node(ssp, *flags); + spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, *flags); spin_lock_irqsave_rcu_node(sdp, *flags); } @@ -383,9 +383,9 @@ static void spin_lock_irqsave_sdp_contention(struct srcu_data *sdp, unsigned lon */ static void spin_lock_irqsave_ssp_contention(struct srcu_struct *ssp, unsigned long *flags) { - if (spin_trylock_irqsave_rcu_node(ssp, *flags)) + if (spin_trylock_irqsave_rcu_node(ssp->srcu_sup, *flags)) return; - spin_lock_irqsave_rcu_node(ssp, *flags); + spin_lock_irqsave_rcu_node(ssp->srcu_sup, *flags); spin_lock_irqsave_check_contention(ssp); } @@ -404,13 +404,13 @@ static void check_init_srcu_struct(struct srcu_struct *ssp) /* The smp_load_acquire() pairs with the smp_store_release(). */ if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_gp_seq_needed))) /*^^^*/ return; /* Already initialized. */ - spin_lock_irqsave_rcu_node(ssp, flags); + spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags); if (!rcu_seq_state(ssp->srcu_gp_seq_needed)) { - spin_unlock_irqrestore_rcu_node(ssp, flags); + spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); return; } init_srcu_struct_fields(ssp, true); - spin_unlock_irqrestore_rcu_node(ssp, flags); + spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); } /* @@ -774,7 +774,7 @@ static void srcu_gp_start(struct srcu_struct *ssp) sdp = per_cpu_ptr(ssp->sda, get_boot_cpu_id()); else sdp = this_cpu_ptr(ssp->sda); - lockdep_assert_held(&ACCESS_PRIVATE(ssp, lock)); + lockdep_assert_held(&ACCESS_PRIVATE(ssp->srcu_sup, lock)); WARN_ON_ONCE(ULONG_CMP_GE(ssp->srcu_gp_seq, ssp->srcu_gp_seq_needed)); spin_lock_rcu_node(sdp); /* Interrupts already disabled. */ rcu_segcblist_advance(&sdp->srcu_cblist, @@ -864,7 +864,7 @@ static void srcu_gp_end(struct srcu_struct *ssp) mutex_lock(&ssp->srcu_sup->srcu_cb_mutex); /* End the current grace period. */ - spin_lock_irq_rcu_node(ssp); + spin_lock_irq_rcu_node(ssp->srcu_sup); idx = rcu_seq_state(ssp->srcu_gp_seq); WARN_ON_ONCE(idx != SRCU_STATE_SCAN2); if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_gp_seq), READ_ONCE(ssp->srcu_gp_seq_needed_exp))) @@ -875,7 +875,7 @@ static void srcu_gp_end(struct srcu_struct *ssp) gpseq = rcu_seq_current(&ssp->srcu_gp_seq); if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, gpseq)) WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, gpseq); - spin_unlock_irq_rcu_node(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); mutex_unlock(&ssp->srcu_gp_mutex); /* A new grace period can start at this point. But only one. */ @@ -924,15 +924,15 @@ static void srcu_gp_end(struct srcu_struct *ssp) mutex_unlock(&ssp->srcu_sup->srcu_cb_mutex); /* Start a new grace period if needed. 
*/ - spin_lock_irq_rcu_node(ssp); + spin_lock_irq_rcu_node(ssp->srcu_sup); gpseq = rcu_seq_current(&ssp->srcu_gp_seq); if (!rcu_seq_state(gpseq) && ULONG_CMP_LT(gpseq, ssp->srcu_gp_seq_needed)) { srcu_gp_start(ssp); - spin_unlock_irq_rcu_node(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); srcu_reschedule(ssp, 0); } else { - spin_unlock_irq_rcu_node(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); } /* Transition to big if needed. */ @@ -975,7 +975,7 @@ static void srcu_funnel_exp_start(struct srcu_struct *ssp, struct srcu_node *snp spin_lock_irqsave_ssp_contention(ssp, &flags); if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, s)) WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, s); - spin_unlock_irqrestore_rcu_node(ssp, flags); + spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); } /* @@ -1064,7 +1064,7 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp, else if (list_empty(&ssp->work.work.entry)) list_add(&ssp->work.work.entry, &srcu_boot_list); } - spin_unlock_irqrestore_rcu_node(ssp, flags); + spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); } /* @@ -1599,17 +1599,17 @@ static void srcu_advance_state(struct srcu_struct *ssp) */ idx = rcu_seq_state(smp_load_acquire(&ssp->srcu_gp_seq)); /* ^^^ */ if (idx == SRCU_STATE_IDLE) { - spin_lock_irq_rcu_node(ssp); + spin_lock_irq_rcu_node(ssp->srcu_sup); if (ULONG_CMP_GE(ssp->srcu_gp_seq, ssp->srcu_gp_seq_needed)) { WARN_ON_ONCE(rcu_seq_state(ssp->srcu_gp_seq)); - spin_unlock_irq_rcu_node(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); mutex_unlock(&ssp->srcu_gp_mutex); return; } idx = rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)); if (idx == SRCU_STATE_IDLE) srcu_gp_start(ssp); - spin_unlock_irq_rcu_node(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); if (idx != SRCU_STATE_IDLE) { mutex_unlock(&ssp->srcu_gp_mutex); return; /* Someone else started the grace period. */ @@ -1623,10 +1623,10 @@ static void srcu_advance_state(struct srcu_struct *ssp) return; /* readers present, retry later. */ } srcu_flip(ssp); - spin_lock_irq_rcu_node(ssp); + spin_lock_irq_rcu_node(ssp->srcu_sup); rcu_seq_set_state(&ssp->srcu_gp_seq, SRCU_STATE_SCAN2); ssp->srcu_n_exp_nodelay = 0; - spin_unlock_irq_rcu_node(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); } if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) == SRCU_STATE_SCAN2) { @@ -1710,7 +1710,7 @@ static void srcu_reschedule(struct srcu_struct *ssp, unsigned long delay) { bool pushgp = true; - spin_lock_irq_rcu_node(ssp); + spin_lock_irq_rcu_node(ssp->srcu_sup); if (ULONG_CMP_GE(ssp->srcu_gp_seq, ssp->srcu_gp_seq_needed)) { if (!WARN_ON_ONCE(rcu_seq_state(ssp->srcu_gp_seq))) { /* All requests fulfilled, time to go idle. */ @@ -1720,7 +1720,7 @@ static void srcu_reschedule(struct srcu_struct *ssp, unsigned long delay) /* Outstanding request and no GP. Start one. */ srcu_gp_start(ssp); } - spin_unlock_irq_rcu_node(ssp); + spin_unlock_irq_rcu_node(ssp->srcu_sup); if (pushgp) queue_delayed_work(rcu_gp_wq, &ssp->work, delay); From patchwork Thu Mar 30 22:47:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney" X-Patchwork-Id: 77450 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp188614vqo; Thu, 30 Mar 2023 16:05:26 -0700 (PDT) X-Google-Smtp-Source: AKy350YjreCd7k0PhylPSN/LBWD7hIFfFQqOZh3xBEsAQwR7TqbSloI1GWChxxpm4qLwNqdhrPNv X-Received: by 2002:aa7:de02:0:b0:4fb:e9b8:ca5a with SMTP id h2-20020aa7de02000000b004fbe9b8ca5amr23535239edv.40.1680217526675; Thu, 30 Mar 2023 16:05:26 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680217526; cv=none; d=google.com; s=arc-20160816; b=GTypqVlwpKlGzXxQ3SrgkSBuwJDrrjelGIr7Jjl4ixPDjNR/w8kmX/PG/LvIrujNmj D6H5AQxVcr55mldofxQjFa+HVKafi4ISIW3EXgwv0zYOkCYrpV8xKHcm5iylWH8JC0QY KRC3kHZK5T4Bl/b65tLPewiicy75tHNpWRIBXdtQSyCcAQzqfhhnn1CKTqU0izFBasm+ r5diVNbCzdg8q76WzIajiFoigxuIiXRbAKv9BSfzElO0cgAnW5gjRbC84zLbPn9rKmqf iOU6p/+Xg9Ouq2fZ0MKPrkkt8SOpCrFOu9SBo9wNNT57KveB2zL8+9YdBQmETXcwk0j9 MJ/Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=6OwMjonrPl+9rqmLWWxJWgR8JvI1R/aSuPUkiOn214c=; b=bKZhG1gJODXH6wjsmZ8vEubJ49wK3qHWweS09A/jGGMhfn63mtGTrhZAZPmZBlJJIt t1f1uqqJyKuNBl201Z+K4Ucp8pbUdk8U+2M6Zc4/Oh+dyBfrAq6S/22l4rDR2Xutc0W9 CtQK2hhrb6kqxlx4QJgzyoJASLRxXvYo2qYksU15piCV/s+TGomwdFNkkrQhdB6ocgfM vEAaY4B+RuDtjijBKT3Eb+CuyN/MU9ebcZOOfImIrRJFVuWP+X582/cijw7xJx6/ZrmV r7BEH8gk5nutoLAsiV+L/KTMEWATJ5arF5o/u3TFye/CueyfhojY4xScl8Q663ZH1ssv kS9g== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=aVtFclAc; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (out1.vger.email. 
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 10/20] srcu: Move ->srcu_gp_mutex from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:16 -0700
Message-Id: <20230330224726.662344-10-paulmck@kernel.org>

This commit moves the ->srcu_gp_mutex field from the srcu_struct structure to the srcu_usage structure to reduce the size of the former in order to improve cache locality.

Suggested-by: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E.
McKenney --- include/linux/srcutree.h | 2 +- kernel/rcu/srcutree.c | 14 +++++++------- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index 694d87b81917..d04e3da6181c 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -67,13 +67,13 @@ struct srcu_usage { int srcu_size_state; /* Small-to-big transition state. */ struct mutex srcu_cb_mutex; /* Serialize CB preparation. */ spinlock_t __private lock; /* Protect counters and size state. */ + struct mutex srcu_gp_mutex; /* Serialize GP work. */ }; /* * Per-SRCU-domain structure, similar in function to rcu_state. */ struct srcu_struct { - struct mutex srcu_gp_mutex; /* Serialize GP work. */ unsigned int srcu_idx; /* Current rdr array element. */ unsigned long srcu_gp_seq; /* Grace-period seq #. */ unsigned long srcu_gp_seq_needed; /* Latest gp_seq needed. */ diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index c42248cf18f6..a36066798de7 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -245,7 +245,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL; ssp->srcu_sup->node = NULL; mutex_init(&ssp->srcu_sup->srcu_cb_mutex); - mutex_init(&ssp->srcu_gp_mutex); + mutex_init(&ssp->srcu_sup->srcu_gp_mutex); ssp->srcu_idx = 0; ssp->srcu_gp_seq = 0; ssp->srcu_barrier_seq = 0; @@ -876,7 +876,7 @@ static void srcu_gp_end(struct srcu_struct *ssp) if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, gpseq)) WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, gpseq); spin_unlock_irq_rcu_node(ssp->srcu_sup); - mutex_unlock(&ssp->srcu_gp_mutex); + mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); /* A new grace period can start at this point. But only one. */ /* Initiate callback invocation as needed. */ @@ -1585,7 +1585,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) { int idx; - mutex_lock(&ssp->srcu_gp_mutex); + mutex_lock(&ssp->srcu_sup->srcu_gp_mutex); /* * Because readers might be delayed for an extended period after @@ -1603,7 +1603,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) if (ULONG_CMP_GE(ssp->srcu_gp_seq, ssp->srcu_gp_seq_needed)) { WARN_ON_ONCE(rcu_seq_state(ssp->srcu_gp_seq)); spin_unlock_irq_rcu_node(ssp->srcu_sup); - mutex_unlock(&ssp->srcu_gp_mutex); + mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); return; } idx = rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)); @@ -1611,7 +1611,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) srcu_gp_start(ssp); spin_unlock_irq_rcu_node(ssp->srcu_sup); if (idx != SRCU_STATE_IDLE) { - mutex_unlock(&ssp->srcu_gp_mutex); + mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); return; /* Someone else started the grace period. */ } } @@ -1619,7 +1619,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) == SRCU_STATE_SCAN1) { idx = 1 ^ (ssp->srcu_idx & 1); if (!try_check_zero(ssp, idx, 1)) { - mutex_unlock(&ssp->srcu_gp_mutex); + mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); return; /* readers present, retry later. */ } srcu_flip(ssp); @@ -1637,7 +1637,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) */ idx = 1 ^ (ssp->srcu_idx & 1); if (!try_check_zero(ssp, idx, 2)) { - mutex_unlock(&ssp->srcu_gp_mutex); + mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); return; /* readers present, retry later. 
*/ } ssp->srcu_n_exp_nodelay = 0;
From patchwork Thu Mar 30 22:47:17 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77459
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 11/20] srcu: Move grace-period fields from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:17 -0700
Message-Id: <20230330224726.662344-11-paulmck@kernel.org>

This commit moves the ->srcu_gp_seq, ->srcu_gp_seq_needed, ->srcu_gp_seq_needed_exp, ->srcu_gp_start, and ->srcu_last_gp_end fields from the srcu_struct structure to the srcu_usage structure to reduce the size of the former in order to improve cache locality.
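For illustration only, a minimal userspace sketch of this layout split might look as follows. The *_sketch type names and the reduced field set are invented for the example and are not the kernel's actual definitions; the point is simply that the rarely touched grace-period bookkeeping sits behind a separately allocated pointer, so the read-side-hot srcu_struct fields no longer share its cache lines.

#include <stdio.h>
#include <stdlib.h>

struct srcu_usage_sketch {
	unsigned long srcu_gp_seq;		/* Grace-period sequence number. */
	unsigned long srcu_gp_seq_needed;	/* Furthest-requested grace period. */
	unsigned long srcu_last_gp_end;		/* Timestamp of last GP end. */
};

struct srcu_struct_sketch {
	unsigned int srcu_idx;			/* Read-side hot: current reader slot. */
	struct srcu_usage_sketch *srcu_sup;	/* Cold GP state, allocated separately. */
};

static int init_srcu_sketch(struct srcu_struct_sketch *ssp)
{
	ssp->srcu_sup = calloc(1, sizeof(*ssp->srcu_sup));	/* Allocate the usage structure first... */
	if (!ssp->srcu_sup)
		return -1;
	ssp->srcu_idx = 0;					/* ...then initialize fields through it. */
	ssp->srcu_sup->srcu_gp_seq = 0;
	ssp->srcu_sup->srcu_gp_seq_needed = 0;
	return 0;
}

int main(void)
{
	struct srcu_struct_sketch ss;

	if (init_srcu_sketch(&ss))
		return 1;
	/* Grace-period state is reached via ->srcu_sup, mirroring the diff below. */
	ss.srcu_sup->srcu_gp_seq_needed = ss.srcu_sup->srcu_gp_seq + 1;
	printf("srcu_struct_sketch: %zu bytes, srcu_usage_sketch: %zu bytes\n",
	       sizeof(ss), sizeof(*ss.srcu_sup));
	free(ss.srcu_sup);
	return 0;
}

The access-pattern change shown here, ssp->field becoming ssp->srcu_sup->field, is what the mechanical edits in the diff below implement throughout kernel/rcu/srcutree.c.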
Suggested-by: Christoph Hellwig Tested-by: Sachin Sant Tested-by: "Zhang, Qiang1" Signed-off-by: Paul E. McKenney Tested-by: Jon Hunter Acked-by: Zqiang --- include/linux/srcutree.h | 25 ++++---- kernel/rcu/srcutree.c | 128 +++++++++++++++++++-------------------- 2 files changed, 77 insertions(+), 76 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index d04e3da6181c..372e35b0e8b6 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -68,6 +68,11 @@ struct srcu_usage { struct mutex srcu_cb_mutex; /* Serialize CB preparation. */ spinlock_t __private lock; /* Protect counters and size state. */ struct mutex srcu_gp_mutex; /* Serialize GP work. */ + unsigned long srcu_gp_seq; /* Grace-period seq #. */ + unsigned long srcu_gp_seq_needed; /* Latest gp_seq needed. */ + unsigned long srcu_gp_seq_needed_exp; /* Furthest future exp GP. */ + unsigned long srcu_gp_start; /* Last GP start timestamp (jiffies) */ + unsigned long srcu_last_gp_end; /* Last GP end timestamp (ns) */ }; /* @@ -75,11 +80,6 @@ struct srcu_usage { */ struct srcu_struct { unsigned int srcu_idx; /* Current rdr array element. */ - unsigned long srcu_gp_seq; /* Grace-period seq #. */ - unsigned long srcu_gp_seq_needed; /* Latest gp_seq needed. */ - unsigned long srcu_gp_seq_needed_exp; /* Furthest future exp GP. */ - unsigned long srcu_gp_start; /* Last GP start timestamp (jiffies) */ - unsigned long srcu_last_gp_end; /* Last GP end timestamp (ns) */ unsigned long srcu_size_jiffies; /* Current contention-measurement interval. */ unsigned long srcu_n_lock_retries; /* Contention events in current interval. */ unsigned long srcu_n_exp_nodelay; /* # expedited no-delays in current GP phase. */ @@ -115,8 +115,13 @@ struct srcu_struct { #define SRCU_STATE_SCAN1 1 #define SRCU_STATE_SCAN2 2 -#define __SRCU_STRUCT_INIT_COMMON(name, usage_name) \ +#define __SRCU_USAGE_INIT(name) \ +{ \ + .lock = __SPIN_LOCK_UNLOCKED(name.lock), \ .srcu_gp_seq_needed = -1UL, \ +} + +#define __SRCU_STRUCT_INIT_COMMON(name, usage_name) \ .work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0), \ .srcu_sup = &usage_name, \ __SRCU_DEP_MAP_INIT(name) @@ -153,9 +158,7 @@ struct srcu_struct { */ #ifdef MODULE # define __DEFINE_SRCU(name, is_static) \ - static struct srcu_usage name##_srcu_usage = { \ - .lock = __SPIN_LOCK_UNLOCKED(name.lock), \ - }; \ + static struct srcu_usage name##_srcu_usage = __SRCU_USAGE_INIT(name##_srcu_usage); \ is_static struct srcu_struct name = __SRCU_STRUCT_INIT_MODULE(name, name##_srcu_usage); \ extern struct srcu_struct * const __srcu_struct_##name; \ struct srcu_struct * const __srcu_struct_##name \ @@ -163,9 +166,7 @@ struct srcu_struct { #else # define __DEFINE_SRCU(name, is_static) \ static DEFINE_PER_CPU(struct srcu_data, name##_srcu_data); \ - static struct srcu_usage name##_srcu_usage = { \ - .lock = __SPIN_LOCK_UNLOCKED(name.lock), \ - }; \ + static struct srcu_usage name##_srcu_usage = __SRCU_USAGE_INIT(name##_srcu_usage); \ is_static struct srcu_struct name = \ __SRCU_STRUCT_INIT(name, name##_srcu_usage, name##_srcu_data) #endif diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index a36066798de7..340eb685cf64 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -135,8 +135,8 @@ static void init_srcu_struct_data(struct srcu_struct *ssp) spin_lock_init(&ACCESS_PRIVATE(sdp, lock)); rcu_segcblist_init(&sdp->srcu_cblist); sdp->srcu_cblist_invoking = false; - sdp->srcu_gp_seq_needed = ssp->srcu_gp_seq; - sdp->srcu_gp_seq_needed_exp = ssp->srcu_gp_seq; + 
sdp->srcu_gp_seq_needed = ssp->srcu_sup->srcu_gp_seq; + sdp->srcu_gp_seq_needed_exp = ssp->srcu_sup->srcu_gp_seq; sdp->mynode = NULL; sdp->cpu = cpu; INIT_WORK(&sdp->work, srcu_invoke_callbacks); @@ -247,7 +247,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) mutex_init(&ssp->srcu_sup->srcu_cb_mutex); mutex_init(&ssp->srcu_sup->srcu_gp_mutex); ssp->srcu_idx = 0; - ssp->srcu_gp_seq = 0; + ssp->srcu_sup->srcu_gp_seq = 0; ssp->srcu_barrier_seq = 0; mutex_init(&ssp->srcu_barrier_mutex); atomic_set(&ssp->srcu_barrier_cpu_cnt, 0); @@ -261,8 +261,8 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) return -ENOMEM; } init_srcu_struct_data(ssp); - ssp->srcu_gp_seq_needed_exp = 0; - ssp->srcu_last_gp_end = ktime_get_mono_fast_ns(); + ssp->srcu_sup->srcu_gp_seq_needed_exp = 0; + ssp->srcu_sup->srcu_last_gp_end = ktime_get_mono_fast_ns(); if (READ_ONCE(ssp->srcu_sup->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) { if (!init_srcu_struct_nodes(ssp, GFP_ATOMIC)) { if (!ssp->sda_is_static) { @@ -275,7 +275,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG); } } - smp_store_release(&ssp->srcu_gp_seq_needed, 0); /* Init done. */ + smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed, 0); /* Init done. */ return 0; } @@ -402,10 +402,10 @@ static void check_init_srcu_struct(struct srcu_struct *ssp) unsigned long flags; /* The smp_load_acquire() pairs with the smp_store_release(). */ - if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_gp_seq_needed))) /*^^^*/ + if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq_needed))) /*^^^*/ return; /* Already initialized. */ spin_lock_irqsave_rcu_node(ssp->srcu_sup, flags); - if (!rcu_seq_state(ssp->srcu_gp_seq_needed)) { + if (!rcu_seq_state(ssp->srcu_sup->srcu_gp_seq_needed)) { spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); return; } @@ -616,11 +616,11 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp) unsigned long j; unsigned long jbase = SRCU_INTERVAL; - if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_gp_seq), READ_ONCE(ssp->srcu_gp_seq_needed_exp))) + if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_sup->srcu_gp_seq), READ_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp))) jbase = 0; - if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq))) { + if (rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq))) { j = jiffies - 1; - gpstart = READ_ONCE(ssp->srcu_gp_start); + gpstart = READ_ONCE(ssp->srcu_sup->srcu_gp_start); if (time_after(j, gpstart)) jbase += j - gpstart; if (!jbase) { @@ -656,12 +656,12 @@ void cleanup_srcu_struct(struct srcu_struct *ssp) if (WARN_ON(rcu_segcblist_n_cbs(&sdp->srcu_cblist))) return; /* Forgot srcu_barrier(), so just leak it! 
*/ } - if (WARN_ON(rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) != SRCU_STATE_IDLE) || - WARN_ON(rcu_seq_current(&ssp->srcu_gp_seq) != ssp->srcu_gp_seq_needed) || + if (WARN_ON(rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)) != SRCU_STATE_IDLE) || + WARN_ON(rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq) != ssp->srcu_sup->srcu_gp_seq_needed) || WARN_ON(srcu_readers_active(ssp))) { pr_info("%s: Active srcu_struct %p read state: %d gp state: %lu/%lu\n", - __func__, ssp, rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)), - rcu_seq_current(&ssp->srcu_gp_seq), ssp->srcu_gp_seq_needed); + __func__, ssp, rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)), + rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq), ssp->srcu_sup->srcu_gp_seq_needed); return; /* Caller forgot to stop doing call_srcu()? */ } kfree(ssp->srcu_sup->node); @@ -775,18 +775,18 @@ static void srcu_gp_start(struct srcu_struct *ssp) else sdp = this_cpu_ptr(ssp->sda); lockdep_assert_held(&ACCESS_PRIVATE(ssp->srcu_sup, lock)); - WARN_ON_ONCE(ULONG_CMP_GE(ssp->srcu_gp_seq, ssp->srcu_gp_seq_needed)); + WARN_ON_ONCE(ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed)); spin_lock_rcu_node(sdp); /* Interrupts already disabled. */ rcu_segcblist_advance(&sdp->srcu_cblist, - rcu_seq_current(&ssp->srcu_gp_seq)); + rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq)); (void)rcu_segcblist_accelerate(&sdp->srcu_cblist, - rcu_seq_snap(&ssp->srcu_gp_seq)); + rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq)); spin_unlock_rcu_node(sdp); /* Interrupts remain disabled. */ - WRITE_ONCE(ssp->srcu_gp_start, jiffies); + WRITE_ONCE(ssp->srcu_sup->srcu_gp_start, jiffies); WRITE_ONCE(ssp->srcu_n_exp_nodelay, 0); smp_mb(); /* Order prior store to ->srcu_gp_seq_needed vs. GP start. */ - rcu_seq_start(&ssp->srcu_gp_seq); - state = rcu_seq_state(ssp->srcu_gp_seq); + rcu_seq_start(&ssp->srcu_sup->srcu_gp_seq); + state = rcu_seq_state(ssp->srcu_sup->srcu_gp_seq); WARN_ON_ONCE(state != SRCU_STATE_SCAN1); } @@ -865,16 +865,16 @@ static void srcu_gp_end(struct srcu_struct *ssp) /* End the current grace period. */ spin_lock_irq_rcu_node(ssp->srcu_sup); - idx = rcu_seq_state(ssp->srcu_gp_seq); + idx = rcu_seq_state(ssp->srcu_sup->srcu_gp_seq); WARN_ON_ONCE(idx != SRCU_STATE_SCAN2); - if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_gp_seq), READ_ONCE(ssp->srcu_gp_seq_needed_exp))) + if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_sup->srcu_gp_seq), READ_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp))) cbdelay = 0; - WRITE_ONCE(ssp->srcu_last_gp_end, ktime_get_mono_fast_ns()); - rcu_seq_end(&ssp->srcu_gp_seq); - gpseq = rcu_seq_current(&ssp->srcu_gp_seq); - if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, gpseq)) - WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, gpseq); + WRITE_ONCE(ssp->srcu_sup->srcu_last_gp_end, ktime_get_mono_fast_ns()); + rcu_seq_end(&ssp->srcu_sup->srcu_gp_seq); + gpseq = rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq); + if (ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed_exp, gpseq)) + WRITE_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp, gpseq); spin_unlock_irq_rcu_node(ssp->srcu_sup); mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); /* A new grace period can start at this point. But only one. */ @@ -925,9 +925,9 @@ static void srcu_gp_end(struct srcu_struct *ssp) /* Start a new grace period if needed. 
*/ spin_lock_irq_rcu_node(ssp->srcu_sup); - gpseq = rcu_seq_current(&ssp->srcu_gp_seq); + gpseq = rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq); if (!rcu_seq_state(gpseq) && - ULONG_CMP_LT(gpseq, ssp->srcu_gp_seq_needed)) { + ULONG_CMP_LT(gpseq, ssp->srcu_sup->srcu_gp_seq_needed)) { srcu_gp_start(ssp); spin_unlock_irq_rcu_node(ssp->srcu_sup); srcu_reschedule(ssp, 0); @@ -960,7 +960,7 @@ static void srcu_funnel_exp_start(struct srcu_struct *ssp, struct srcu_node *snp if (snp) for (; snp != NULL; snp = snp->srcu_parent) { sgsne = READ_ONCE(snp->srcu_gp_seq_needed_exp); - if (WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_gp_seq, s)) || + if (WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_sup->srcu_gp_seq, s)) || (!srcu_invl_snp_seq(sgsne) && ULONG_CMP_GE(sgsne, s))) return; spin_lock_irqsave_rcu_node(snp, flags); @@ -973,8 +973,8 @@ static void srcu_funnel_exp_start(struct srcu_struct *ssp, struct srcu_node *snp spin_unlock_irqrestore_rcu_node(snp, flags); } spin_lock_irqsave_ssp_contention(ssp, &flags); - if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, s)) - WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, s); + if (ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed_exp, s)) + WRITE_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp, s); spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); } @@ -1010,7 +1010,7 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp, if (snp_leaf) /* Each pass through the loop does one level of the srcu_node tree. */ for (snp = snp_leaf; snp != NULL; snp = snp->srcu_parent) { - if (WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_gp_seq, s)) && snp != snp_leaf) + if (WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_sup->srcu_gp_seq, s)) && snp != snp_leaf) return; /* GP already done and CBs recorded. */ spin_lock_irqsave_rcu_node(snp, flags); snp_seq = snp->srcu_have_cbs[idx]; @@ -1037,20 +1037,20 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp, /* Top of tree, must ensure the grace period will be started. */ spin_lock_irqsave_ssp_contention(ssp, &flags); - if (ULONG_CMP_LT(ssp->srcu_gp_seq_needed, s)) { + if (ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed, s)) { /* * Record need for grace period s. Pair with load * acquire setting up for initialization. */ - smp_store_release(&ssp->srcu_gp_seq_needed, s); /*^^^*/ + smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed, s); /*^^^*/ } - if (!do_norm && ULONG_CMP_LT(ssp->srcu_gp_seq_needed_exp, s)) - WRITE_ONCE(ssp->srcu_gp_seq_needed_exp, s); + if (!do_norm && ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed_exp, s)) + WRITE_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp, s); /* If grace period not already in progress, start it. */ - if (!WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_gp_seq, s)) && - rcu_seq_state(ssp->srcu_gp_seq) == SRCU_STATE_IDLE) { - WARN_ON_ONCE(ULONG_CMP_GE(ssp->srcu_gp_seq, ssp->srcu_gp_seq_needed)); + if (!WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_sup->srcu_gp_seq, s)) && + rcu_seq_state(ssp->srcu_sup->srcu_gp_seq) == SRCU_STATE_IDLE) { + WARN_ON_ONCE(ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed)); srcu_gp_start(ssp); // And how can that list_add() in the "else" clause @@ -1164,18 +1164,18 @@ static bool srcu_might_be_idle(struct srcu_struct *ssp) /* First, see if enough time has passed since the last GP. */ t = ktime_get_mono_fast_ns(); - tlast = READ_ONCE(ssp->srcu_last_gp_end); + tlast = READ_ONCE(ssp->srcu_sup->srcu_last_gp_end); if (exp_holdoff == 0 || time_in_range_open(t, tlast, tlast + exp_holdoff)) return false; /* Too soon after last GP. 
*/ /* Next, check for probable idleness. */ - curseq = rcu_seq_current(&ssp->srcu_gp_seq); + curseq = rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq); smp_mb(); /* Order ->srcu_gp_seq with ->srcu_gp_seq_needed. */ - if (ULONG_CMP_LT(curseq, READ_ONCE(ssp->srcu_gp_seq_needed))) + if (ULONG_CMP_LT(curseq, READ_ONCE(ssp->srcu_sup->srcu_gp_seq_needed))) return false; /* Grace period in progress, so not idle. */ smp_mb(); /* Order ->srcu_gp_seq with prior access. */ - if (curseq != rcu_seq_current(&ssp->srcu_gp_seq)) + if (curseq != rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq)) return false; /* GP # changed, so not idle. */ return true; /* With reasonable probability, idle! */ } @@ -1218,8 +1218,8 @@ static unsigned long srcu_gp_start_if_needed(struct srcu_struct *ssp, if (rhp) rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp); rcu_segcblist_advance(&sdp->srcu_cblist, - rcu_seq_current(&ssp->srcu_gp_seq)); - s = rcu_seq_snap(&ssp->srcu_gp_seq); + rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq)); + s = rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq); (void)rcu_segcblist_accelerate(&sdp->srcu_cblist, s); if (ULONG_CMP_LT(sdp->srcu_gp_seq_needed, s)) { sdp->srcu_gp_seq_needed = s; @@ -1430,7 +1430,7 @@ unsigned long get_state_synchronize_srcu(struct srcu_struct *ssp) // Any prior manipulation of SRCU-protected data must happen // before the load from ->srcu_gp_seq. smp_mb(); - return rcu_seq_snap(&ssp->srcu_gp_seq); + return rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq); } EXPORT_SYMBOL_GPL(get_state_synchronize_srcu); @@ -1477,7 +1477,7 @@ EXPORT_SYMBOL_GPL(start_poll_synchronize_srcu); */ bool poll_state_synchronize_srcu(struct srcu_struct *ssp, unsigned long cookie) { - if (!rcu_seq_done(&ssp->srcu_gp_seq, cookie)) + if (!rcu_seq_done(&ssp->srcu_sup->srcu_gp_seq, cookie)) return false; // Ensure that the end of the SRCU grace period happens before // any subsequent code that the caller might execute. @@ -1597,16 +1597,16 @@ static void srcu_advance_state(struct srcu_struct *ssp) * The load-acquire ensures that we see the accesses performed * by the prior grace period. 
*/ - idx = rcu_seq_state(smp_load_acquire(&ssp->srcu_gp_seq)); /* ^^^ */ + idx = rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq)); /* ^^^ */ if (idx == SRCU_STATE_IDLE) { spin_lock_irq_rcu_node(ssp->srcu_sup); - if (ULONG_CMP_GE(ssp->srcu_gp_seq, ssp->srcu_gp_seq_needed)) { - WARN_ON_ONCE(rcu_seq_state(ssp->srcu_gp_seq)); + if (ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed)) { + WARN_ON_ONCE(rcu_seq_state(ssp->srcu_sup->srcu_gp_seq)); spin_unlock_irq_rcu_node(ssp->srcu_sup); mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); return; } - idx = rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)); + idx = rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)); if (idx == SRCU_STATE_IDLE) srcu_gp_start(ssp); spin_unlock_irq_rcu_node(ssp->srcu_sup); @@ -1616,7 +1616,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) } } - if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) == SRCU_STATE_SCAN1) { + if (rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)) == SRCU_STATE_SCAN1) { idx = 1 ^ (ssp->srcu_idx & 1); if (!try_check_zero(ssp, idx, 1)) { mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); @@ -1624,12 +1624,12 @@ static void srcu_advance_state(struct srcu_struct *ssp) } srcu_flip(ssp); spin_lock_irq_rcu_node(ssp->srcu_sup); - rcu_seq_set_state(&ssp->srcu_gp_seq, SRCU_STATE_SCAN2); + rcu_seq_set_state(&ssp->srcu_sup->srcu_gp_seq, SRCU_STATE_SCAN2); ssp->srcu_n_exp_nodelay = 0; spin_unlock_irq_rcu_node(ssp->srcu_sup); } - if (rcu_seq_state(READ_ONCE(ssp->srcu_gp_seq)) == SRCU_STATE_SCAN2) { + if (rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)) == SRCU_STATE_SCAN2) { /* * SRCU read-side critical sections are normally short, @@ -1666,7 +1666,7 @@ static void srcu_invoke_callbacks(struct work_struct *work) rcu_cblist_init(&ready_cbs); spin_lock_irq_rcu_node(sdp); rcu_segcblist_advance(&sdp->srcu_cblist, - rcu_seq_current(&ssp->srcu_gp_seq)); + rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq)); if (sdp->srcu_cblist_invoking || !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) { spin_unlock_irq_rcu_node(sdp); @@ -1694,7 +1694,7 @@ static void srcu_invoke_callbacks(struct work_struct *work) spin_lock_irq_rcu_node(sdp); rcu_segcblist_add_len(&sdp->srcu_cblist, -len); (void)rcu_segcblist_accelerate(&sdp->srcu_cblist, - rcu_seq_snap(&ssp->srcu_gp_seq)); + rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq)); sdp->srcu_cblist_invoking = false; more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist); spin_unlock_irq_rcu_node(sdp); @@ -1711,12 +1711,12 @@ static void srcu_reschedule(struct srcu_struct *ssp, unsigned long delay) bool pushgp = true; spin_lock_irq_rcu_node(ssp->srcu_sup); - if (ULONG_CMP_GE(ssp->srcu_gp_seq, ssp->srcu_gp_seq_needed)) { - if (!WARN_ON_ONCE(rcu_seq_state(ssp->srcu_gp_seq))) { + if (ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed)) { + if (!WARN_ON_ONCE(rcu_seq_state(ssp->srcu_sup->srcu_gp_seq))) { /* All requests fulfilled, time to go idle. */ pushgp = false; } - } else if (!rcu_seq_state(ssp->srcu_gp_seq)) { + } else if (!rcu_seq_state(ssp->srcu_sup->srcu_gp_seq)) { /* Outstanding request and no GP. Start one. 
*/ srcu_gp_start(ssp); } @@ -1762,7 +1762,7 @@ void srcutorture_get_gp_data(enum rcutorture_type test_type, if (test_type != SRCU_FLAVOR) return; *flags = 0; - *gp_seq = rcu_seq_current(&ssp->srcu_gp_seq); + *gp_seq = rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq); } EXPORT_SYMBOL_GPL(srcutorture_get_gp_data); @@ -1791,7 +1791,7 @@ void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf) if (ss_state < 0 || ss_state >= ARRAY_SIZE(srcu_size_state_name)) ss_state_idx = ARRAY_SIZE(srcu_size_state_name) - 1; pr_alert("%s%s Tree SRCU g%ld state %d (%s)", - tt, tf, rcu_seq_current(&ssp->srcu_gp_seq), ss_state, + tt, tf, rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq), ss_state, srcu_size_state_name[ss_state_idx]); if (!ssp->sda) { // Called after cleanup_srcu_struct(), perhaps. @@ -1905,7 +1905,7 @@ static void srcu_module_going(struct module *mod) for (i = 0; i < mod->num_srcu_structs; i++) { ssp = *(sspp++); - if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_gp_seq_needed)) && + if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq_needed)) && !WARN_ON_ONCE(!ssp->sda_is_static)) cleanup_srcu_struct(ssp); free_percpu(ssp->sda);
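The hunks above, like the polled grace-period interfaces they touch (get_state_synchronize_srcu(), poll_state_synchronize_srcu()), all revolve around a single grace-period sequence counter that now lives behind ssp->srcu_sup. For readers following the conversion, here is a minimal stand-alone sketch of the sequence-number convention: the low-order bits hold the grace-period phase and the bits above them hold the counter. The constants and helper names are simplified assumptions modeled on the kernel's rcu_seq_*() helpers, not the kernel code itself.

/* Sketch only: a simplified model of the rcu_seq_*() grace-period encoding. */
#include <stdio.h>

#define SEQ_STATE_MASK 0x3UL            /* low bits: current GP phase */
#define SEQ_CTR_SHIFT  2                /* GP counter lives above the phase bits */

static unsigned long seq_state(unsigned long s)   { return s & SEQ_STATE_MASK; }
static unsigned long seq_counter(unsigned long s) { return s >> SEQ_CTR_SHIFT; }

/* Earliest sequence value at which a full GP starting after "s" has ended. */
static unsigned long seq_snap(unsigned long s)
{
        return (s + 2 * SEQ_STATE_MASK + 1) & ~SEQ_STATE_MASK;
}

/* Has the GP identified by cookie "snap" completed by the time "cur" was read? */
static int seq_done(unsigned long cur, unsigned long snap)
{
        return (long)(cur - snap) >= 0;  /* wrap-tolerant, like ULONG_CMP_GE() */
}

int main(void)
{
        unsigned long gp_seq = 0;
        unsigned long cookie = seq_snap(gp_seq);   /* like get_state_synchronize_srcu() */

        printf("ctr=%lu phase=%lu cookie=%lu done=%d\n",
               seq_counter(gp_seq), seq_state(gp_seq), cookie, seq_done(gp_seq, cookie));
        gp_seq += 1UL << SEQ_CTR_SHIFT;            /* model one completed grace period */
        printf("after one GP: done=%d\n", seq_done(gp_seq, cookie));
        return 0;
}

A cookie taken with seq_snap() before a grace period reports "not done" until the counter has advanced past it, which is the property the poll_state_synchronize_srcu() hunk above relies on.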
From patchwork Thu Mar 30 22:47:18 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77448
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney" , Sachin Sant , "Zhang, Qiang1"
Subject: [PATCH rcu 12/20] srcu: Move heuristics fields from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:18 -0700
Message-Id: <20230330224726.662344-12-paulmck@kernel.org>

This commit moves the ->srcu_size_jiffies, ->srcu_n_lock_retries, and ->srcu_n_exp_nodelay fields from the srcu_struct structure to the srcu_usage structure to reduce the size of the former in order to improve cache locality.
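This and the neighboring patches all apply the same hot/cold split: reader-facing fields stay in the small srcu_struct while update-side bookkeeping moves behind a single srcu_usage pointer. The sketch below illustrates the general pattern only; the structure and field names are invented for illustration and are not the kernel's definitions. The patch's tags and diff follow the sketch.

/* Sketch: keep the hot, reader-side fields small and move cold,
 * update-side bookkeeping behind a single pointer. */
#include <stdlib.h>

struct usage_sketch {                /* cold: touched only on the update side */
        unsigned long gp_seq;
        unsigned long size_jiffies;
        unsigned long n_lock_retries;
        unsigned long n_exp_nodelay;
};

struct struct_sketch {               /* hot: what readers actually touch */
        unsigned int idx;
        void *per_cpu_data;
        struct usage_sketch *sup;    /* everything else lives one pointer away */
};

static int init_sketch(struct struct_sketch *sp)
{
        sp->sup = calloc(1, sizeof(*sp->sup));
        return sp->sup ? 0 : -1;
}

int main(void)
{
        struct struct_sketch s = { 0 };

        if (init_sketch(&s))
                return 1;
        s.sup->n_exp_nodelay++;      /* update-side state, off the hot structure */
        free(s.sup);
        return 0;
}

The payoff is that the frequently read structure stays compact and cache-friendly, while the rarely touched state costs one extra pointer dereference on the update side.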
Suggested-by: Christoph Hellwig Tested-by: Sachin Sant Tested-by: "Zhang, Qiang1" Signed-off-by: Paul E. McKenney --- include/linux/srcutree.h | 6 +++--- kernel/rcu/srcutree.c | 18 +++++++++--------- 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index 372e35b0e8b6..3023492d8d89 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -73,6 +73,9 @@ struct srcu_usage { unsigned long srcu_gp_seq_needed_exp; /* Furthest future exp GP. */ unsigned long srcu_gp_start; /* Last GP start timestamp (jiffies) */ unsigned long srcu_last_gp_end; /* Last GP end timestamp (ns) */ + unsigned long srcu_size_jiffies; /* Current contention-measurement interval. */ + unsigned long srcu_n_lock_retries; /* Contention events in current interval. */ + unsigned long srcu_n_exp_nodelay; /* # expedited no-delays in current GP phase. */ }; /* @@ -80,9 +83,6 @@ struct srcu_usage { */ struct srcu_struct { unsigned int srcu_idx; /* Current rdr array element. */ - unsigned long srcu_size_jiffies; /* Current contention-measurement interval. */ - unsigned long srcu_n_lock_retries; /* Contention events in current interval. */ - unsigned long srcu_n_exp_nodelay; /* # expedited no-delays in current GP phase. */ struct srcu_data __percpu *sda; /* Per-CPU srcu_data array. */ bool sda_is_static; /* May ->sda be passed to free_percpu()? */ unsigned long srcu_barrier_seq; /* srcu_barrier seq #. */ diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 340eb685cf64..291fb520bce0 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -348,11 +348,11 @@ static void spin_lock_irqsave_check_contention(struct srcu_struct *ssp) if (!SRCU_SIZING_IS_CONTEND() || ssp->srcu_sup->srcu_size_state) return; j = jiffies; - if (ssp->srcu_size_jiffies != j) { - ssp->srcu_size_jiffies = j; - ssp->srcu_n_lock_retries = 0; + if (ssp->srcu_sup->srcu_size_jiffies != j) { + ssp->srcu_sup->srcu_size_jiffies = j; + ssp->srcu_sup->srcu_n_lock_retries = 0; } - if (++ssp->srcu_n_lock_retries <= small_contention_lim) + if (++ssp->srcu_sup->srcu_n_lock_retries <= small_contention_lim) return; __srcu_transition_to_big(ssp); } @@ -624,8 +624,8 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp) if (time_after(j, gpstart)) jbase += j - gpstart; if (!jbase) { - WRITE_ONCE(ssp->srcu_n_exp_nodelay, READ_ONCE(ssp->srcu_n_exp_nodelay) + 1); - if (READ_ONCE(ssp->srcu_n_exp_nodelay) > srcu_max_nodelay_phase) + WRITE_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay, READ_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay) + 1); + if (READ_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay) > srcu_max_nodelay_phase) jbase = 1; } } @@ -783,7 +783,7 @@ static void srcu_gp_start(struct srcu_struct *ssp) rcu_seq_snap(&ssp->srcu_sup->srcu_gp_seq)); spin_unlock_rcu_node(sdp); /* Interrupts remain disabled. */ WRITE_ONCE(ssp->srcu_sup->srcu_gp_start, jiffies); - WRITE_ONCE(ssp->srcu_n_exp_nodelay, 0); + WRITE_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay, 0); smp_mb(); /* Order prior store to ->srcu_gp_seq_needed vs. GP start. 
*/ rcu_seq_start(&ssp->srcu_sup->srcu_gp_seq); state = rcu_seq_state(ssp->srcu_sup->srcu_gp_seq); @@ -1625,7 +1625,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) srcu_flip(ssp); spin_lock_irq_rcu_node(ssp->srcu_sup); rcu_seq_set_state(&ssp->srcu_sup->srcu_gp_seq, SRCU_STATE_SCAN2); - ssp->srcu_n_exp_nodelay = 0; + ssp->srcu_sup->srcu_n_exp_nodelay = 0; spin_unlock_irq_rcu_node(ssp->srcu_sup); } @@ -1640,7 +1640,7 @@ static void srcu_advance_state(struct srcu_struct *ssp) mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex); return; /* readers present, retry later. */ } - ssp->srcu_n_exp_nodelay = 0; + ssp->srcu_sup->srcu_n_exp_nodelay = 0; srcu_gp_end(ssp); /* Releases ->srcu_gp_mutex. */ } }
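The relocated ->srcu_size_jiffies and ->srcu_n_lock_retries fields implement SRCU's contention heuristic: if too many lock-acquisition retries land in the same jiffy, the srcu_struct is transitioned to its larger, per-node layout. Below is a stand-alone sketch of that idea; the threshold value and the names are assumptions for illustration, not the kernel's exact code.

/* Sketch: count lock retries within one jiffy window; upsize past a limit. */
#include <stdbool.h>

#define CONTENTION_LIM 100UL             /* assumed threshold, not the kernel's */

struct contention_sketch {
        unsigned long size_jiffies;      /* window the retries are counted in */
        unsigned long n_lock_retries;    /* contention events in that window */
        bool is_big;                     /* has the structure been upsized? */
};

static void note_contention(struct contention_sketch *cp, unsigned long now)
{
        if (cp->size_jiffies != now) {   /* new interval: restart the count */
                cp->size_jiffies = now;
                cp->n_lock_retries = 0;
        }
        if (++cp->n_lock_retries > CONTENTION_LIM)
                cp->is_big = true;       /* stands in for __srcu_transition_to_big() */
}

int main(void)
{
        struct contention_sketch c = { 0 };

        for (int i = 0; i < 200; i++)
                note_contention(&c, 42); /* 200 retries within the same "jiffy" */
        return c.is_big ? 0 : 1;         /* expect the transition to trigger */
}

As in the kernel, the transition here is one-way: once contention has pushed the structure to the big layout, nothing shrinks it back.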
From patchwork Thu Mar 30 22:47:19 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77453
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney" , Sachin Sant , "Zhang, Qiang1"
Subject: [PATCH rcu 13/20] srcu: Move ->sda_is_static from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:19 -0700
Message-Id: <20230330224726.662344-13-paulmck@kernel.org>

This commit moves the ->sda_is_static field from the srcu_struct structure to the srcu_usage structure to reduce the size of the former in order to improve cache locality. Suggested-by: Christoph Hellwig Tested-by: Sachin Sant Tested-by: "Zhang, Qiang1" Signed-off-by: Paul E.
McKenney --- include/linux/srcutree.h | 2 +- kernel/rcu/srcutree.c | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index 3023492d8d89..d3534ecb806e 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -76,6 +76,7 @@ struct srcu_usage { unsigned long srcu_size_jiffies; /* Current contention-measurement interval. */ unsigned long srcu_n_lock_retries; /* Contention events in current interval. */ unsigned long srcu_n_exp_nodelay; /* # expedited no-delays in current GP phase. */ + bool sda_is_static; /* May ->sda be passed to free_percpu()? */ }; /* @@ -84,7 +85,6 @@ struct srcu_usage { struct srcu_struct { unsigned int srcu_idx; /* Current rdr array element. */ struct srcu_data __percpu *sda; /* Per-CPU srcu_data array. */ - bool sda_is_static; /* May ->sda be passed to free_percpu()? */ unsigned long srcu_barrier_seq; /* srcu_barrier seq #. */ struct mutex srcu_barrier_mutex; /* Serialize barrier ops. */ struct completion srcu_barrier_completion; diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 291fb520bce0..20f2373f7e25 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -252,7 +252,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) mutex_init(&ssp->srcu_barrier_mutex); atomic_set(&ssp->srcu_barrier_cpu_cnt, 0); INIT_DELAYED_WORK(&ssp->work, process_srcu); - ssp->sda_is_static = is_static; + ssp->srcu_sup->sda_is_static = is_static; if (!is_static) ssp->sda = alloc_percpu(struct srcu_data); if (!ssp->sda) { @@ -265,7 +265,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) ssp->srcu_sup->srcu_last_gp_end = ktime_get_mono_fast_ns(); if (READ_ONCE(ssp->srcu_sup->srcu_size_state) == SRCU_SIZE_SMALL && SRCU_SIZING_IS_INIT()) { if (!init_srcu_struct_nodes(ssp, GFP_ATOMIC)) { - if (!ssp->sda_is_static) { + if (!ssp->srcu_sup->sda_is_static) { free_percpu(ssp->sda); ssp->sda = NULL; kfree(ssp->srcu_sup); @@ -667,7 +667,7 @@ void cleanup_srcu_struct(struct srcu_struct *ssp) kfree(ssp->srcu_sup->node); ssp->srcu_sup->node = NULL; ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL; - if (!ssp->sda_is_static) { + if (!ssp->srcu_sup->sda_is_static) { free_percpu(ssp->sda); ssp->sda = NULL; kfree(ssp->srcu_sup); @@ -1906,7 +1906,7 @@ static void srcu_module_going(struct module *mod) for (i = 0; i < mod->num_srcu_structs; i++) { ssp = *(sspp++); if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq_needed)) && - !WARN_ON_ONCE(!ssp->sda_is_static)) + !WARN_ON_ONCE(!ssp->srcu_sup->sda_is_static)) cleanup_srcu_struct(ssp); free_percpu(ssp->sda); } From patchwork Thu Mar 30 22:47:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney"
X-Patchwork-Id: 77439
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney" , Sachin Sant , "Zhang, Qiang1"
Subject: [PATCH rcu 14/20] srcu: Move srcu_barrier() fields from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:20 -0700
Message-Id: <20230330224726.662344-14-paulmck@kernel.org>

This commit moves the ->srcu_barrier_seq, ->srcu_barrier_mutex, ->srcu_barrier_completion, and ->srcu_barrier_cpu_cnt fields from the srcu_struct structure to the srcu_usage structure to reduce the size of the former in order to improve cache locality.
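The barrier fields named above implement a common pattern: entrain one callback on each CPU's callback list, count the postings with an atomic that is pre-biased by one so it cannot reach zero early, and wait on a completion that fires when the last callback runs. The sequential sketch below models only the counting; plain ints stand in for the atomic counter and the completion, and the names are illustrative rather than the kernel API. The patch's tags and diff follow the sketch.

/* Sketch: srcu_barrier()-style counting. The count starts at 1 so that the
 * "completion" cannot fire until every per-CPU callback has been posted. */
#include <stdio.h>

static int barrier_cpu_cnt;
static int barrier_done;                    /* stands in for the completion */

static void barrier_cb(void)                /* runs when a posted callback fires */
{
        if (--barrier_cpu_cnt == 0)
                barrier_done = 1;
}

int main(void)
{
        int nr_cpus = 4;

        barrier_cpu_cnt = 1;                /* initial bias */
        for (int cpu = 0; cpu < nr_cpus; cpu++)
                barrier_cpu_cnt++;          /* one callback entrained per CPU */
        for (int cpu = 0; cpu < nr_cpus; cpu++)
                barrier_cb();               /* callbacks invoked later, in any order */
        if (--barrier_cpu_cnt == 0)         /* drop the initial bias */
                barrier_done = 1;
        printf("barrier complete: %d\n", barrier_done);
        return barrier_done ? 0 : 1;
}

Dropping the initial bias only after every callback has been posted is what prevents an early-firing callback from declaring the barrier complete too soon.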
Suggested-by: Christoph Hellwig Tested-by: Sachin Sant Tested-by: "Zhang, Qiang1" Signed-off-by: Paul E. McKenney --- include/linux/srcutree.h | 14 +++++++------- kernel/rcu/srcutree.c | 38 +++++++++++++++++++------------------- 2 files changed, 26 insertions(+), 26 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index d3534ecb806e..d544ec1c0c8e 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -77,6 +77,13 @@ struct srcu_usage { unsigned long srcu_n_lock_retries; /* Contention events in current interval. */ unsigned long srcu_n_exp_nodelay; /* # expedited no-delays in current GP phase. */ bool sda_is_static; /* May ->sda be passed to free_percpu()? */ + unsigned long srcu_barrier_seq; /* srcu_barrier seq #. */ + struct mutex srcu_barrier_mutex; /* Serialize barrier ops. */ + struct completion srcu_barrier_completion; + /* Awaken barrier rq at end. */ + atomic_t srcu_barrier_cpu_cnt; /* # CPUs not yet posting a */ + /* callback for the barrier */ + /* operation. */ }; /* @@ -85,13 +92,6 @@ struct srcu_usage { struct srcu_struct { unsigned int srcu_idx; /* Current rdr array element. */ struct srcu_data __percpu *sda; /* Per-CPU srcu_data array. */ - unsigned long srcu_barrier_seq; /* srcu_barrier seq #. */ - struct mutex srcu_barrier_mutex; /* Serialize barrier ops. */ - struct completion srcu_barrier_completion; - /* Awaken barrier rq at end. */ - atomic_t srcu_barrier_cpu_cnt; /* # CPUs not yet posting a */ - /* callback for the barrier */ - /* operation. */ unsigned long reschedule_jiffies; unsigned long reschedule_count; struct delayed_work work; diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 20f2373f7e25..97d1fe9a160c 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -248,9 +248,9 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) mutex_init(&ssp->srcu_sup->srcu_gp_mutex); ssp->srcu_idx = 0; ssp->srcu_sup->srcu_gp_seq = 0; - ssp->srcu_barrier_seq = 0; - mutex_init(&ssp->srcu_barrier_mutex); - atomic_set(&ssp->srcu_barrier_cpu_cnt, 0); + ssp->srcu_sup->srcu_barrier_seq = 0; + mutex_init(&ssp->srcu_sup->srcu_barrier_mutex); + atomic_set(&ssp->srcu_sup->srcu_barrier_cpu_cnt, 0); INIT_DELAYED_WORK(&ssp->work, process_srcu); ssp->srcu_sup->sda_is_static = is_static; if (!is_static) @@ -1496,8 +1496,8 @@ static void srcu_barrier_cb(struct rcu_head *rhp) sdp = container_of(rhp, struct srcu_data, srcu_barrier_head); ssp = sdp->ssp; - if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt)) - complete(&ssp->srcu_barrier_completion); + if (atomic_dec_and_test(&ssp->srcu_sup->srcu_barrier_cpu_cnt)) + complete(&ssp->srcu_sup->srcu_barrier_completion); } /* @@ -1511,13 +1511,13 @@ static void srcu_barrier_cb(struct rcu_head *rhp) static void srcu_barrier_one_cpu(struct srcu_struct *ssp, struct srcu_data *sdp) { spin_lock_irq_rcu_node(sdp); - atomic_inc(&ssp->srcu_barrier_cpu_cnt); + atomic_inc(&ssp->srcu_sup->srcu_barrier_cpu_cnt); sdp->srcu_barrier_head.func = srcu_barrier_cb; debug_rcu_head_queue(&sdp->srcu_barrier_head); if (!rcu_segcblist_entrain(&sdp->srcu_cblist, &sdp->srcu_barrier_head)) { debug_rcu_head_unqueue(&sdp->srcu_barrier_head); - atomic_dec(&ssp->srcu_barrier_cpu_cnt); + atomic_dec(&ssp->srcu_sup->srcu_barrier_cpu_cnt); } spin_unlock_irq_rcu_node(sdp); } @@ -1530,20 +1530,20 @@ void srcu_barrier(struct srcu_struct *ssp) { int cpu; int idx; - unsigned long s = rcu_seq_snap(&ssp->srcu_barrier_seq); + unsigned long s = rcu_seq_snap(&ssp->srcu_sup->srcu_barrier_seq); 
check_init_srcu_struct(ssp); - mutex_lock(&ssp->srcu_barrier_mutex); - if (rcu_seq_done(&ssp->srcu_barrier_seq, s)) { + mutex_lock(&ssp->srcu_sup->srcu_barrier_mutex); + if (rcu_seq_done(&ssp->srcu_sup->srcu_barrier_seq, s)) { smp_mb(); /* Force ordering following return. */ - mutex_unlock(&ssp->srcu_barrier_mutex); + mutex_unlock(&ssp->srcu_sup->srcu_barrier_mutex); return; /* Someone else did our work for us. */ } - rcu_seq_start(&ssp->srcu_barrier_seq); - init_completion(&ssp->srcu_barrier_completion); + rcu_seq_start(&ssp->srcu_sup->srcu_barrier_seq); + init_completion(&ssp->srcu_sup->srcu_barrier_completion); /* Initial count prevents reaching zero until all CBs are posted. */ - atomic_set(&ssp->srcu_barrier_cpu_cnt, 1); + atomic_set(&ssp->srcu_sup->srcu_barrier_cpu_cnt, 1); idx = __srcu_read_lock_nmisafe(ssp); if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER) @@ -1554,12 +1554,12 @@ void srcu_barrier(struct srcu_struct *ssp) __srcu_read_unlock_nmisafe(ssp, idx); /* Remove the initial count, at which point reaching zero can happen. */ - if (atomic_dec_and_test(&ssp->srcu_barrier_cpu_cnt)) - complete(&ssp->srcu_barrier_completion); - wait_for_completion(&ssp->srcu_barrier_completion); + if (atomic_dec_and_test(&ssp->srcu_sup->srcu_barrier_cpu_cnt)) + complete(&ssp->srcu_sup->srcu_barrier_completion); + wait_for_completion(&ssp->srcu_sup->srcu_barrier_completion); - rcu_seq_end(&ssp->srcu_barrier_seq); - mutex_unlock(&ssp->srcu_barrier_mutex); + rcu_seq_end(&ssp->srcu_sup->srcu_barrier_seq); + mutex_unlock(&ssp->srcu_sup->srcu_barrier_mutex); } EXPORT_SYMBOL_GPL(srcu_barrier); From patchwork Thu Mar 30 22:47:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney"
X-Patchwork-Id: 77451
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney" , Sachin Sant , "Zhang, Qiang1"
Subject: [PATCH rcu 15/20] srcu: Move work-scheduling fields from srcu_struct to srcu_usage
Date: Thu, 30 Mar 2023 15:47:21 -0700
Message-Id: <20230330224726.662344-15-paulmck@kernel.org>

This commit moves the ->reschedule_jiffies, ->reschedule_count, and ->work fields from the srcu_struct structure to the srcu_usage structure to reduce the size of the former in order to improve cache locality.
However, this means that the container_of() calls cannot get a pointer to the srcu_struct because they are no longer in the srcu_struct. This issue is addressed by adding a ->srcu_ssp field in the srcu_usage structure that references the corresponding srcu_struct structure. And given the presence of the sup pointer to the srcu_usage structure, replace some ssp->srcu_usage-> instances with sup->. [ paulmck Apply feedback from kernel test robot. ] Link: https://lore.kernel.org/oe-kbuild-all/202303191400.iO5BOqka-lkp@intel.com/ Suggested-by: Christoph Hellwig Tested-by: Sachin Sant Tested-by: "Zhang, Qiang1" Signed-off-by: Paul E. McKenney --- include/linux/srcutree.h | 9 +++++---- kernel/rcu/srcutree.c | 41 +++++++++++++++++++++------------------- 2 files changed, 27 insertions(+), 23 deletions(-) diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h index d544ec1c0c8e..cd0cdd8142c5 100644 --- a/include/linux/srcutree.h +++ b/include/linux/srcutree.h @@ -84,6 +84,10 @@ struct srcu_usage { atomic_t srcu_barrier_cpu_cnt; /* # CPUs not yet posting a */ /* callback for the barrier */ /* operation. */ + unsigned long reschedule_jiffies; + unsigned long reschedule_count; + struct delayed_work work; + struct srcu_struct *srcu_ssp; }; /* @@ -92,9 +96,6 @@ struct srcu_usage { struct srcu_struct { unsigned int srcu_idx; /* Current rdr array element. */ struct srcu_data __percpu *sda; /* Per-CPU srcu_data array. */ - unsigned long reschedule_jiffies; - unsigned long reschedule_count; - struct delayed_work work; struct lockdep_map dep_map; struct srcu_usage *srcu_sup; /* Update-side data. */ }; @@ -119,10 +120,10 @@ struct srcu_struct { { \ .lock = __SPIN_LOCK_UNLOCKED(name.lock), \ .srcu_gp_seq_needed = -1UL, \ + .work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0), \ } #define __SRCU_STRUCT_INIT_COMMON(name, usage_name) \ - .work = __DELAYED_WORK_INITIALIZER(name.work, NULL, 0), \ .srcu_sup = &usage_name, \ __SRCU_DEP_MAP_INIT(name) diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 97d1fe9a160c..169a6513b739 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -251,7 +251,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) ssp->srcu_sup->srcu_barrier_seq = 0; mutex_init(&ssp->srcu_sup->srcu_barrier_mutex); atomic_set(&ssp->srcu_sup->srcu_barrier_cpu_cnt, 0); - INIT_DELAYED_WORK(&ssp->work, process_srcu); + INIT_DELAYED_WORK(&ssp->srcu_sup->work, process_srcu); ssp->srcu_sup->sda_is_static = is_static; if (!is_static) ssp->sda = alloc_percpu(struct srcu_data); @@ -275,6 +275,7 @@ static int init_srcu_struct_fields(struct srcu_struct *ssp, bool is_static) WRITE_ONCE(ssp->srcu_sup->srcu_size_state, SRCU_SIZE_BIG); } } + ssp->srcu_sup->srcu_ssp = ssp; smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed, 0); /* Init done. */ return 0; } @@ -647,7 +648,7 @@ void cleanup_srcu_struct(struct srcu_struct *ssp) return; /* Just leak it! */ if (WARN_ON(srcu_readers_active(ssp))) return; /* Just leak it! */ - flush_delayed_work(&ssp->work); + flush_delayed_work(&ssp->srcu_sup->work); for_each_possible_cpu(cpu) { struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu); @@ -1059,10 +1060,10 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp, // can only be executed during early boot when there is only // the one boot CPU running with interrupts still disabled. 
if (likely(srcu_init_done)) - queue_delayed_work(rcu_gp_wq, &ssp->work, + queue_delayed_work(rcu_gp_wq, &ssp->srcu_sup->work, !!srcu_get_delay(ssp)); - else if (list_empty(&ssp->work.work.entry)) - list_add(&ssp->work.work.entry, &srcu_boot_list); + else if (list_empty(&ssp->srcu_sup->work.work.entry)) + list_add(&ssp->srcu_sup->work.work.entry, &srcu_boot_list); } spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags); } @@ -1723,7 +1724,7 @@ static void srcu_reschedule(struct srcu_struct *ssp, unsigned long delay) spin_unlock_irq_rcu_node(ssp->srcu_sup); if (pushgp) - queue_delayed_work(rcu_gp_wq, &ssp->work, delay); + queue_delayed_work(rcu_gp_wq, &ssp->srcu_sup->work, delay); } /* @@ -1734,22 +1735,24 @@ static void process_srcu(struct work_struct *work) unsigned long curdelay; unsigned long j; struct srcu_struct *ssp; + struct srcu_usage *sup; - ssp = container_of(work, struct srcu_struct, work.work); + sup = container_of(work, struct srcu_usage, work.work); + ssp = sup->srcu_ssp; srcu_advance_state(ssp); curdelay = srcu_get_delay(ssp); if (curdelay) { - WRITE_ONCE(ssp->reschedule_count, 0); + WRITE_ONCE(sup->reschedule_count, 0); } else { j = jiffies; - if (READ_ONCE(ssp->reschedule_jiffies) == j) { - WRITE_ONCE(ssp->reschedule_count, READ_ONCE(ssp->reschedule_count) + 1); - if (READ_ONCE(ssp->reschedule_count) > srcu_max_nodelay) + if (READ_ONCE(sup->reschedule_jiffies) == j) { + WRITE_ONCE(sup->reschedule_count, READ_ONCE(sup->reschedule_count) + 1); + if (READ_ONCE(sup->reschedule_count) > srcu_max_nodelay) curdelay = 1; } else { - WRITE_ONCE(ssp->reschedule_count, 1); - WRITE_ONCE(ssp->reschedule_jiffies, j); + WRITE_ONCE(sup->reschedule_count, 1); + WRITE_ONCE(sup->reschedule_jiffies, j); } } srcu_reschedule(ssp, curdelay); @@ -1848,7 +1851,7 @@ early_initcall(srcu_bootup_announce); void __init srcu_init(void) { - struct srcu_struct *ssp; + struct srcu_usage *sup; /* Decide on srcu_struct-size strategy. */ if (SRCU_SIZING_IS(SRCU_SIZING_AUTO)) { @@ -1868,13 +1871,13 @@ void __init srcu_init(void) */ srcu_init_done = true; while (!list_empty(&srcu_boot_list)) { - ssp = list_first_entry(&srcu_boot_list, struct srcu_struct, + sup = list_first_entry(&srcu_boot_list, struct srcu_usage, work.work.entry); - list_del_init(&ssp->work.work.entry); + list_del_init(&sup->work.work.entry); if (SRCU_SIZING_IS(SRCU_SIZING_INIT) && - ssp->srcu_sup->srcu_size_state == SRCU_SIZE_SMALL) - ssp->srcu_sup->srcu_size_state = SRCU_SIZE_ALLOC; - queue_work(rcu_gp_wq, &ssp->work.work); + sup->srcu_size_state == SRCU_SIZE_SMALL) + sup->srcu_size_state = SRCU_SIZE_ALLOC; + queue_work(rcu_gp_wq, &sup->work.work); } } From patchwork Thu Mar 30 22:47:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Paul E. 
McKenney"
X-Patchwork-Id: 77440
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney"
Subject: [PATCH rcu 16/20] srcu: Check for readers at module-exit time
Date: Thu, 30 Mar 2023 15:47:22 -0700
Message-Id: <20230330224726.662344-16-paulmck@kernel.org>

If a given statically allocated in-module srcu_struct structure was ever used for updates, srcu_module_going() will invoke cleanup_srcu_struct() at module-exit time. This will check for the error case of SRCU readers persisting past module-exit time.
On the other hand, if this srcu_struct structure never went through a grace period, srcu_module_going() only invokes free_percpu(), which would result in strange failures if SRCU readers persisted past module-exit time. This commit therefore adds a srcu_readers_active() check to srcu_module_going(), splatting if readers have persisted and refraining from invoking free_percpu() in that case. Better to leak memory than to suffer silent memory corruption! [ paulmck: Apply Zhang, Qiang1 feedback on memory leak. ] Signed-off-by: Paul E. McKenney --- kernel/rcu/srcutree.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index 169a6513b739..f9dd6ed5503e 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -1911,7 +1911,8 @@ static void srcu_module_going(struct module *mod) if (!rcu_seq_state(smp_load_acquire(&ssp->srcu_sup->srcu_gp_seq_needed)) && !WARN_ON_ONCE(!ssp->srcu_sup->sda_is_static)) cleanup_srcu_struct(ssp); - free_percpu(ssp->sda); + if (!WARN_ON(srcu_readers_active(ssp))) + free_percpu(ssp->sda); } }
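The resulting module-exit policy is: if the srcu_struct was ever used, run the full cleanup; if readers still exist, warn and deliberately leak the per-CPU data rather than free memory that may still be in use. A compact sketch of that decision follows; the structure, fields, and helpers are invented stand-ins, not the kernel's srcu_module_going().

/* Sketch: module-exit handling of an in-module srcu_struct-like object. */
#include <stdbool.h>
#include <stdio.h>

struct ssp_sketch {
        bool ever_used;         /* was a grace period ever requested? */
        int  readers;           /* outstanding read-side critical sections */
        bool sda_freed;         /* has the per-CPU data been released? */
};

static void cleanup_sketch(struct ssp_sketch *ssp)
{
        /* The real cleanup flushes work and sanity-checks callback lists. */
        (void)ssp;
}

static void module_going_sketch(struct ssp_sketch *ssp)
{
        if (ssp->readers != 0) {                 /* srcu_readers_active() analogue */
                fprintf(stderr, "readers persist past module exit: leaking sda\n");
                return;                          /* better to leak than to corrupt */
        }
        if (ssp->ever_used)
                cleanup_sketch(ssp);             /* cleanup_srcu_struct() analogue */
        ssp->sda_freed = true;                   /* free_percpu() analogue */
}

int main(void)
{
        struct ssp_sketch ok = { .ever_used = true, .readers = 0 };
        struct ssp_sketch bad = { .ever_used = true, .readers = 1 };

        module_going_sketch(&ok);
        module_going_sketch(&bad);
        printf("ok freed=%d, bad freed=%d\n", ok.sda_freed, bad.sda_freed);
        return 0;
}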
From patchwork Thu Mar 30 22:47:23 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77455
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, hch@lst.de, "Paul E. McKenney" , Sachin Sant , "Zhang, Qiang1"
Subject: [PATCH rcu 17/20] srcu: Fix long lines in srcu_get_delay()
Date: Thu, 30 Mar 2023 15:47:23 -0700
Message-Id: <20230330224726.662344-17-paulmck@kernel.org>

This commit creates an srcu_usage pointer named "sup" as a shorter synonym for the "ssp->srcu_sup" that was bloating several lines of code. Signed-off-by: Paul E. McKenney
Tested-by: Sachin Sant Tested-by: "Zhang, Qiang1" Cc: Christoph Hellwig --- kernel/rcu/srcutree.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c index f9dd6ed5503e..9699bcab7215 100644 --- a/kernel/rcu/srcutree.c +++ b/kernel/rcu/srcutree.c @@ -616,17 +616,18 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp) unsigned long gpstart; unsigned long j; unsigned long jbase = SRCU_INTERVAL; + struct srcu_usage *sup = ssp->srcu_sup; - if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_sup->srcu_gp_seq), READ_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp))) + if (ULONG_CMP_LT(READ_ONCE(sup->srcu_gp_seq), READ_ONCE(sup->srcu_gp_seq_needed_exp))) jbase = 0; - if (rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq))) { + if (rcu_seq_state(READ_ONCE(sup->srcu_gp_seq))) { j = jiffies - 1; - gpstart = READ_ONCE(ssp->srcu_sup->srcu_gp_start); + gpstart = READ_ONCE(sup->srcu_gp_start); if (time_after(j, gpstart)) jbase += j - gpstart; if (!jbase) { - WRITE_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay, READ_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay) + 1); - if (READ_ONCE(ssp->srcu_sup->srcu_n_exp_nodelay) > srcu_max_nodelay_phase) + WRITE_ONCE(sup->srcu_n_exp_nodelay, READ_ONCE(sup->srcu_n_exp_nodelay) + 1); + if (READ_ONCE(sup->srcu_n_exp_nodelay) > srcu_max_nodelay_phase) jbase = 1; } }
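srcu_get_delay(), just refactored above, decides how long to wait before the next pass of grace-period processing: no delay while expedited work is pending, a delay that grows with how long the current grace period has been running, and a forced minimum delay once too many consecutive zero-delay passes have occurred. The sketch below models that decision with invented constants and names; it is an illustration of the heuristic, not the kernel function.

/* Sketch: compute a work-queue delay in the spirit of srcu_get_delay(). */
#include <stdio.h>

#define INTERVAL           1UL   /* assumed base delay, in "jiffies" */
#define MAX_NODELAY_PHASE 10UL   /* assumed limit on consecutive zero delays */

struct delay_state {
        unsigned long n_exp_nodelay; /* zero-delay passes in the current phase */
};

static unsigned long get_delay_sketch(struct delay_state *st, int exp_wanted,
                                      int gp_in_progress, unsigned long now,
                                      unsigned long gp_start)
{
        unsigned long jbase = INTERVAL;

        if (exp_wanted)
                jbase = 0;                        /* expedited work wanted: hurry */
        if (gp_in_progress) {
                if (now > gp_start)
                        jbase += now - gp_start;  /* long-running GP: back off */
                if (!jbase && ++st->n_exp_nodelay > MAX_NODELAY_PHASE)
                        jbase = 1;                /* stop monopolizing the CPU */
        }
        return jbase;
}

int main(void)
{
        struct delay_state st = { 0 };
        unsigned long d = 0;

        for (int i = 0; i <= 12; i++)
                d = get_delay_sketch(&st, 1, 1, 100, 100);
        printf("delay after repeated expedited passes: %lu\n", d);  /* expect 1 */
        return 0;
}

The returned value feeds queue_delayed_work() (see the srcu_funnel_gp_start() and srcu_reschedule() hunks earlier in the series), so a zero result means the grace-period machinery is re-run as soon as possible.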
From patchwork Thu Mar 30 22:47:24 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77456

From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 18/20] srcu: Fix long lines in cleanup_srcu_struct()
Date: Thu, 30 Mar 2023 15:47:24 -0700
Message-Id: <20230330224726.662344-18-paulmck@kernel.org>

This commit creates an srcu_usage pointer named "sup" as a shorter synonym
for the "ssp->srcu_sup" that was bloating several lines of code.

Cc: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/srcutree.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 9699bcab7215..11a08201ca0a 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -644,12 +644,13 @@ static unsigned long srcu_get_delay(struct srcu_struct *ssp)
 void cleanup_srcu_struct(struct srcu_struct *ssp)
 {
 	int cpu;
+	struct srcu_usage *sup = ssp->srcu_sup;
 
 	if (WARN_ON(!srcu_get_delay(ssp)))
 		return; /* Just leak it! */
 	if (WARN_ON(srcu_readers_active(ssp)))
 		return; /* Just leak it! */
-	flush_delayed_work(&ssp->srcu_sup->work);
+	flush_delayed_work(&sup->work);
 	for_each_possible_cpu(cpu) {
 		struct srcu_data *sdp = per_cpu_ptr(ssp->sda, cpu);
 
@@ -658,21 +659,21 @@ void cleanup_srcu_struct(struct srcu_struct *ssp)
 		if (WARN_ON(rcu_segcblist_n_cbs(&sdp->srcu_cblist)))
 			return; /* Forgot srcu_barrier(), so just leak it! */
 	}
-	if (WARN_ON(rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)) != SRCU_STATE_IDLE) ||
-	    WARN_ON(rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq) != ssp->srcu_sup->srcu_gp_seq_needed) ||
+	if (WARN_ON(rcu_seq_state(READ_ONCE(sup->srcu_gp_seq)) != SRCU_STATE_IDLE) ||
+	    WARN_ON(rcu_seq_current(&sup->srcu_gp_seq) != sup->srcu_gp_seq_needed) ||
 	    WARN_ON(srcu_readers_active(ssp))) {
 		pr_info("%s: Active srcu_struct %p read state: %d gp state: %lu/%lu\n",
-			__func__, ssp, rcu_seq_state(READ_ONCE(ssp->srcu_sup->srcu_gp_seq)),
-			rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq), ssp->srcu_sup->srcu_gp_seq_needed);
+			__func__, ssp, rcu_seq_state(READ_ONCE(sup->srcu_gp_seq)),
+			rcu_seq_current(&sup->srcu_gp_seq), sup->srcu_gp_seq_needed);
 		return; /* Caller forgot to stop doing call_srcu()? */
 	}
-	kfree(ssp->srcu_sup->node);
-	ssp->srcu_sup->node = NULL;
-	ssp->srcu_sup->srcu_size_state = SRCU_SIZE_SMALL;
-	if (!ssp->srcu_sup->sda_is_static) {
+	kfree(sup->node);
+	sup->node = NULL;
+	sup->srcu_size_state = SRCU_SIZE_SMALL;
+	if (!sup->sda_is_static) {
 		free_percpu(ssp->sda);
 		ssp->sda = NULL;
-		kfree(ssp->srcu_sup);
+		kfree(sup);
 		ssp->srcu_sup = NULL;
 	}
 }
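
One detail worth noting in the cleanup_srcu_struct() hunk above: the aliased
structure is itself what gets freed at the end, so the final references and
the kfree() go through "sup" while the back-pointer in the srcu_struct is
cleared separately. A simplified stand-alone sketch of that shape, with
invented names (not the kernel code):

#include <stdlib.h>

struct usage { int state; };
struct owner { struct usage *usage; };

static void cleanup_owner(struct owner *o)
{
	struct usage *up = o->usage;	/* take the alias before tearing things down */

	if (!up)
		return;
	up->state = 0;			/* final uses go through the alias ...      */
	free(up);			/* ... including the free itself,           */
	o->usage = NULL;		/* then clear the original pointer field.   */
}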
McKenney" X-Patchwork-Id: 77434 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a59:b0ea:0:b0:3b6:4342:cba0 with SMTP id b10csp180678vqo; Thu, 30 Mar 2023 15:50:06 -0700 (PDT) X-Google-Smtp-Source: AKy350ZOi8gZabRbp6T7klFAsErh0uYder4J9x7+yKAzkliK9d12TzDIl7qGTd+uQJGrQsoUN9t7 X-Received: by 2002:aa7:dad3:0:b0:4fb:999:e04c with SMTP id x19-20020aa7dad3000000b004fb0999e04cmr24767740eds.38.1680216606230; Thu, 30 Mar 2023 15:50:06 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1680216606; cv=none; d=google.com; s=arc-20160816; b=zxOF5GrKLuvZj0BenwRoeHumFdq0qGjEDoD12wH0SV6U06m7QnGroe7cdNWGRYZESj 5r2MtmZT+BFq+2I21TLZbRffW7c3pTA3ewCr0OAnZoU+JFtX1hF2mGAZykZPoGf4tCTH Tb00d08sTtQQWf6tFn93hcEBOMsnpE50UDFXqMSLOb/vaBzj/xbs/E3G8TeQhf+X98CP IEarf8a/l5pGMxH2p46H/tdqGzaijcwXIdCSoxWLpZHLM0Y2a+h4smnZtlWtMWVGQnBE veUoQkwmhBKCpfFHbsS/7WF5yu9jZVh5hcDWlz882RXMdKIIq1AWzSEo6OHIv/0h9Eoc XD2A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=iHDzQfkvRRVQBBt5zTrrZ2mnlmDsen8WP3zUL5pDXaY=; b=ojFraLJm1UQzJ6fkUkIaJXPqVfnijyQ97D5so6YLrBEdz79WSl6hYNyhdm8H2gcqTC 6db0IdJ9mw8R6kv556sSCz4GifejJN8bmCQefvpG6IyDtz8uc5bPfQ/uUmrVb3Rq8ihS 1Qhojg4G1oQ9lIye8r4O7gBz7NOjNeB848gR4OrCZj/HfqIlCMgDckmAUtqfcPlD5noG 6Y22cS+DLy37EWtN/J9/pQGKmDZFAvR5Pd7q5UaX+SEUuqgKCurdLqof+zejsn+wR0cj MrulQVc102WL1cnU8I8rjK+P3IhDiLkzFclNOEsSHIznvRy1I7hl6hOc7nllslJ0bauh 4Yxw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=pgJkhYZq; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (out1.vger.email. 
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 19/20] srcu: Fix long lines in srcu_gp_end()
Date: Thu, 30 Mar 2023 15:47:25 -0700
Message-Id: <20230330224726.662344-19-paulmck@kernel.org>

This commit creates an srcu_usage pointer named "sup" as a shorter synonym
for the "ssp->srcu_sup" that was bloating several lines of code.

Cc: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/srcutree.c | 41 +++++++++++++++++++++--------------------
 1 file changed, 21 insertions(+), 20 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 11a08201ca0a..f661a0f6bc0d 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -862,28 +862,29 @@ static void srcu_gp_end(struct srcu_struct *ssp)
 	unsigned long sgsne;
 	struct srcu_node *snp;
 	int ss_state;
+	struct srcu_usage *sup = ssp->srcu_sup;
 
 	/* Prevent more than one additional grace period. */
-	mutex_lock(&ssp->srcu_sup->srcu_cb_mutex);
+	mutex_lock(&sup->srcu_cb_mutex);
 
 	/* End the current grace period. */
-	spin_lock_irq_rcu_node(ssp->srcu_sup);
-	idx = rcu_seq_state(ssp->srcu_sup->srcu_gp_seq);
+	spin_lock_irq_rcu_node(sup);
+	idx = rcu_seq_state(sup->srcu_gp_seq);
 	WARN_ON_ONCE(idx != SRCU_STATE_SCAN2);
-	if (ULONG_CMP_LT(READ_ONCE(ssp->srcu_sup->srcu_gp_seq), READ_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp)))
+	if (ULONG_CMP_LT(READ_ONCE(sup->srcu_gp_seq), READ_ONCE(sup->srcu_gp_seq_needed_exp)))
 		cbdelay = 0;
-	WRITE_ONCE(ssp->srcu_sup->srcu_last_gp_end, ktime_get_mono_fast_ns());
-	rcu_seq_end(&ssp->srcu_sup->srcu_gp_seq);
-	gpseq = rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq);
-	if (ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed_exp, gpseq))
-		WRITE_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp, gpseq);
-	spin_unlock_irq_rcu_node(ssp->srcu_sup);
-	mutex_unlock(&ssp->srcu_sup->srcu_gp_mutex);
+	WRITE_ONCE(sup->srcu_last_gp_end, ktime_get_mono_fast_ns());
+	rcu_seq_end(&sup->srcu_gp_seq);
+	gpseq = rcu_seq_current(&sup->srcu_gp_seq);
+	if (ULONG_CMP_LT(sup->srcu_gp_seq_needed_exp, gpseq))
+		WRITE_ONCE(sup->srcu_gp_seq_needed_exp, gpseq);
+	spin_unlock_irq_rcu_node(sup);
+	mutex_unlock(&sup->srcu_gp_mutex);
 
 	/* A new grace period can start at this point. But only one. */
 
 	/* Initiate callback invocation as needed. */
-	ss_state = smp_load_acquire(&ssp->srcu_sup->srcu_size_state);
+	ss_state = smp_load_acquire(&sup->srcu_size_state);
 	if (ss_state < SRCU_SIZE_WAIT_BARRIER) {
 		srcu_schedule_cbs_sdp(per_cpu_ptr(ssp->sda, get_boot_cpu_id()),
 				      cbdelay);
@@ -892,7 +893,7 @@ static void srcu_gp_end(struct srcu_struct *ssp)
 		srcu_for_each_node_breadth_first(ssp, snp) {
 			spin_lock_irq_rcu_node(snp);
 			cbs = false;
-			last_lvl = snp >= ssp->srcu_sup->level[rcu_num_lvls - 1];
+			last_lvl = snp >= sup->level[rcu_num_lvls - 1];
 			if (last_lvl)
 				cbs = ss_state < SRCU_SIZE_BIG || snp->srcu_have_cbs[idx] == gpseq;
 			snp->srcu_have_cbs[idx] = gpseq;
@@ -924,18 +925,18 @@ static void srcu_gp_end(struct srcu_struct *ssp)
 	}
 
 	/* Callback initiation done, allow grace periods after next. */
-	mutex_unlock(&ssp->srcu_sup->srcu_cb_mutex);
+	mutex_unlock(&sup->srcu_cb_mutex);
 
 	/* Start a new grace period if needed. */
-	spin_lock_irq_rcu_node(ssp->srcu_sup);
-	gpseq = rcu_seq_current(&ssp->srcu_sup->srcu_gp_seq);
+	spin_lock_irq_rcu_node(sup);
+	gpseq = rcu_seq_current(&sup->srcu_gp_seq);
 	if (!rcu_seq_state(gpseq) &&
-	    ULONG_CMP_LT(gpseq, ssp->srcu_sup->srcu_gp_seq_needed)) {
+	    ULONG_CMP_LT(gpseq, sup->srcu_gp_seq_needed)) {
 		srcu_gp_start(ssp);
-		spin_unlock_irq_rcu_node(ssp->srcu_sup);
+		spin_unlock_irq_rcu_node(sup);
 		srcu_reschedule(ssp, 0);
 	} else {
-		spin_unlock_irq_rcu_node(ssp->srcu_sup);
+		spin_unlock_irq_rcu_node(sup);
 	}
 
 	/* Transition to big if needed. */
@@ -943,7 +944,7 @@ static void srcu_gp_end(struct srcu_struct *ssp)
 		if (ss_state == SRCU_SIZE_ALLOC)
 			init_srcu_struct_nodes(ssp, GFP_KERNEL);
 		else
-			smp_store_release(&ssp->srcu_sup->srcu_size_state, ss_state + 1);
+			smp_store_release(&sup->srcu_size_state, ss_state + 1);
 	}
 }
 
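
The same substitution appears once more in srcu_funnel_gp_start() below. As a
general pattern it is a pure cleanup only when the cached pointer cannot change
underneath the function, which appears to be the case here since ->srcu_sup is
set up at initialization time. A hedged illustration of the one pitfall to keep
in mind, again with invented names rather than kernel code:

struct usage { int val; };
struct owner { struct usage *usage; };

static int read_val(struct owner *o)
{
	struct usage *up = o->usage;	/* snapshot of the pointer */

	/*
	 * If some other path could reassign o->usage here, "up" would
	 * still point at the old structure.  The alias is a faithful
	 * shorthand only when such reassignment cannot happen, which is
	 * what makes these conversions purely cosmetic.
	 */
	return up->val;
}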
From patchwork Thu Mar 30 22:47:26 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 77458

From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org,
    hch@lst.de, "Paul E. McKenney", Sachin Sant, "Zhang, Qiang1"
Subject: [PATCH rcu 20/20] srcu: Fix long lines in srcu_funnel_gp_start()
Date: Thu, 30 Mar 2023 15:47:26 -0700
Message-Id: <20230330224726.662344-20-paulmck@kernel.org>

This commit creates an srcu_usage pointer named "sup" as a shorter synonym
for the "ssp->srcu_sup" that was bloating several lines of code.

Cc: Christoph Hellwig
Tested-by: Sachin Sant
Tested-by: "Zhang, Qiang1"
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/srcutree.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index f661a0f6bc0d..a887cfc89894 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1004,9 +1004,10 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp,
 	struct srcu_node *snp;
 	struct srcu_node *snp_leaf;
 	unsigned long snp_seq;
+	struct srcu_usage *sup = ssp->srcu_sup;
 
 	/* Ensure that snp node tree is fully initialized before traversing it */
-	if (smp_load_acquire(&ssp->srcu_sup->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
+	if (smp_load_acquire(&sup->srcu_size_state) < SRCU_SIZE_WAIT_BARRIER)
 		snp_leaf = NULL;
 	else
 		snp_leaf = sdp->mynode;
@@ -1014,7 +1015,7 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp,
 	if (snp_leaf)
 		/* Each pass through the loop does one level of the srcu_node tree. */
 		for (snp = snp_leaf; snp != NULL; snp = snp->srcu_parent) {
-			if (WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_sup->srcu_gp_seq, s)) && snp != snp_leaf)
+			if (WARN_ON_ONCE(rcu_seq_done(&sup->srcu_gp_seq, s)) && snp != snp_leaf)
 				return; /* GP already done and CBs recorded. */
 			spin_lock_irqsave_rcu_node(snp, flags);
 			snp_seq = snp->srcu_have_cbs[idx];
@@ -1041,20 +1042,20 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp,
 
 	/* Top of tree, must ensure the grace period will be started. */
 	spin_lock_irqsave_ssp_contention(ssp, &flags);
-	if (ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed, s)) {
+	if (ULONG_CMP_LT(sup->srcu_gp_seq_needed, s)) {
 		/*
 		 * Record need for grace period s. Pair with load
 		 * acquire setting up for initialization.
 		 */
-		smp_store_release(&ssp->srcu_sup->srcu_gp_seq_needed, s); /*^^^*/
+		smp_store_release(&sup->srcu_gp_seq_needed, s); /*^^^*/
 	}
-	if (!do_norm && ULONG_CMP_LT(ssp->srcu_sup->srcu_gp_seq_needed_exp, s))
-		WRITE_ONCE(ssp->srcu_sup->srcu_gp_seq_needed_exp, s);
+	if (!do_norm && ULONG_CMP_LT(sup->srcu_gp_seq_needed_exp, s))
+		WRITE_ONCE(sup->srcu_gp_seq_needed_exp, s);
 
 	/* If grace period not already in progress, start it. */
-	if (!WARN_ON_ONCE(rcu_seq_done(&ssp->srcu_sup->srcu_gp_seq, s)) &&
-	    rcu_seq_state(ssp->srcu_sup->srcu_gp_seq) == SRCU_STATE_IDLE) {
-		WARN_ON_ONCE(ULONG_CMP_GE(ssp->srcu_sup->srcu_gp_seq, ssp->srcu_sup->srcu_gp_seq_needed));
+	if (!WARN_ON_ONCE(rcu_seq_done(&sup->srcu_gp_seq, s)) &&
+	    rcu_seq_state(sup->srcu_gp_seq) == SRCU_STATE_IDLE) {
+		WARN_ON_ONCE(ULONG_CMP_GE(sup->srcu_gp_seq, sup->srcu_gp_seq_needed));
 		srcu_gp_start(ssp);
 
 		// And how can that list_add() in the "else" clause
@@ -1063,12 +1064,12 @@ static void srcu_funnel_gp_start(struct srcu_struct *ssp, struct srcu_data *sdp,
 		// can only be executed during early boot when there is only
 		// the one boot CPU running with interrupts still disabled.
 		if (likely(srcu_init_done))
-			queue_delayed_work(rcu_gp_wq, &ssp->srcu_sup->work,
+			queue_delayed_work(rcu_gp_wq, &sup->work,
 					   !!srcu_get_delay(ssp));
-		else if (list_empty(&ssp->srcu_sup->work.work.entry))
-			list_add(&ssp->srcu_sup->work.work.entry, &srcu_boot_list);
+		else if (list_empty(&sup->work.work.entry))
+			list_add(&sup->work.work.entry, &srcu_boot_list);
 	}
-	spin_unlock_irqrestore_rcu_node(ssp->srcu_sup, flags);
+	spin_unlock_irqrestore_rcu_node(sup, flags);
 }
 
 /*