From patchwork Thu Jan 5 00:24:41 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 39198
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Uladzislau Rezki (Sony)", "Paul E. McKenney"
Subject: [PATCH rcu 1/8] rcu: Refactor kvfree_call_rcu() and high-level helpers
Date: Wed, 4 Jan 2023 16:24:41 -0800
Message-Id: <20230105002448.1768892-1-paulmck@kernel.org>
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

From: "Uladzislau Rezki (Sony)"

Currently, kvfree_call_rcu() takes an offset within a structure as its second parameter, so a helper such as kvfree_rcu_arg_2() has to convert the rcu_head and the pointer to be freed into an offset in order to pass it.
That leads to an extra conversion on macro entry. Instead of converting, refactor the code in such a way that the pointer to be freed is passed directly to kvfree_call_rcu(). This patch does not make any functional change and is transparent to all kvfree_rcu() users.

Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Paul E. McKenney
---
 include/linux/rcupdate.h | 5 ++---
 include/linux/rcutiny.h | 12 ++++++------
 include/linux/rcutree.h | 2 +-
 kernel/rcu/tiny.c | 9 +++------
 kernel/rcu/tree.c | 29 ++++++++++++-----------------
 5 files changed, 24 insertions(+), 33 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 03abf883a281b..f38d4469d7f30 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1011,8 +1011,7 @@ do { \ \ if (___p) { \ BUILD_BUG_ON(!__is_kvfree_rcu_offset(offsetof(typeof(*(ptr)), rhf))); \ - kvfree_call_rcu(&((___p)->rhf), (rcu_callback_t)(unsigned long) \ - (offsetof(typeof(*(ptr)), rhf))); \ + kvfree_call_rcu(&((___p)->rhf), (void *) (___p)); \ } \ } while (0)
@@ -1021,7 +1020,7 @@ do { \ typeof(ptr) ___p = (ptr); \ \ if (___p) \ - kvfree_call_rcu(NULL, (rcu_callback_t) (___p)); \ + kvfree_call_rcu(NULL, (void *) (___p)); \ } while (0) /*
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 68f9070aa1110..7f17acf29dda7 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -98,25 +98,25 @@ static inline void synchronize_rcu_expedited(void) */ extern void kvfree(const void *addr); -static inline void __kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func) +static inline void __kvfree_call_rcu(struct rcu_head *head, void *ptr) { if (head) { - call_rcu(head, func); + call_rcu(head, (rcu_callback_t) ((void *) head - ptr)); return; } // kvfree_rcu(one_arg) call.
might_sleep(); synchronize_rcu(); - kvfree((void *) func); + kvfree(ptr); } #ifdef CONFIG_KASAN_GENERIC -void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func); +void kvfree_call_rcu(struct rcu_head *head, void *ptr); #else -static inline void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func) +static inline void kvfree_call_rcu(struct rcu_head *head, void *ptr) { - __kvfree_call_rcu(head, func); + __kvfree_call_rcu(head, ptr); } #endif diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h index 4003bf6cfa1c2..56bccb5a8fdea 100644 --- a/include/linux/rcutree.h +++ b/include/linux/rcutree.h @@ -33,7 +33,7 @@ static inline void rcu_virt_note_context_switch(void) } void synchronize_rcu_expedited(void); -void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func); +void kvfree_call_rcu(struct rcu_head *head, void *ptr); void rcu_barrier(void); bool rcu_eqs_special_set(int cpu); diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c index 72913ce21258b..42f7589e51e09 100644 --- a/kernel/rcu/tiny.c +++ b/kernel/rcu/tiny.c @@ -246,15 +246,12 @@ bool poll_state_synchronize_rcu(unsigned long oldstate) EXPORT_SYMBOL_GPL(poll_state_synchronize_rcu); #ifdef CONFIG_KASAN_GENERIC -void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func) +void kvfree_call_rcu(struct rcu_head *head, void *ptr) { - if (head) { - void *ptr = (void *) head - (unsigned long) func; - + if (head) kasan_record_aux_stack_noalloc(ptr); - } - __kvfree_call_rcu(head, func); + __kvfree_call_rcu(head, ptr); } EXPORT_SYMBOL_GPL(kvfree_call_rcu); #endif diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index cf34a961821ad..7d222acd85bfd 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3103,8 +3103,8 @@ static void kfree_rcu_work(struct work_struct *work) * This list is named "Channel 3". */ for (; head; head = next) { - unsigned long offset = (unsigned long)head->func; - void *ptr = (void *)head - offset; + void *ptr = (void *) head->func; + unsigned long offset = (void *) head - ptr; next = head->next; debug_rcu_head_unqueue((struct rcu_head *)ptr); @@ -3342,26 +3342,21 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp, * be free'd in workqueue context. This allows us to: batch requests together to * reduce the number of grace periods during heavy kfree_rcu()/kvfree_rcu() load. */ -void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func) +void kvfree_call_rcu(struct rcu_head *head, void *ptr) { unsigned long flags; struct kfree_rcu_cpu *krcp; bool success; - void *ptr; - if (head) { - ptr = (void *) head - (unsigned long) func; - } else { - /* - * Please note there is a limitation for the head-less - * variant, that is why there is a clear rule for such - * objects: it can be used from might_sleep() context - * only. For other places please embed an rcu_head to - * your data. - */ + /* + * Please note there is a limitation for the head-less + * variant, that is why there is a clear rule for such + * objects: it can be used from might_sleep() context + * only. For other places please embed an rcu_head to + * your data. + */ + if (!head) might_sleep(); - ptr = (unsigned long *) func; - } // Queue the object but don't yet schedule the batch. if (debug_rcu_head_queue(ptr)) { @@ -3382,7 +3377,7 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func) // Inline if kvfree_rcu(one_arg) call. 
goto unlock_return; - head->func = func; + head->func = ptr; head->next = krcp->head; krcp->head = head; success = true;
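The conversion above hinges on a simple piece of pointer arithmetic: the old code encoded the offset of the rcu_head within the enclosing object in head->func, while the new code passes the object pointer itself and recomputes the offset as the difference between the rcu_head address and that pointer. The following minimal userspace sketch (illustrative structure names, not kernel code) shows that both encodings carry the same information:

```c
#include <stdio.h>
#include <stddef.h>

struct rcu_head { void *next; void *func; };

struct obj {
	int payload;
	struct rcu_head rh;
};

int main(void)
{
	struct obj o = { .payload = 42 };
	struct rcu_head *head = &o.rh;

	/* Old scheme: the offset was passed and the object pointer recovered. */
	unsigned long offset = offsetof(struct obj, rh);
	struct obj *from_offset = (struct obj *)((char *)head - offset);

	/*
	 * New scheme: the object pointer is passed directly, and the offset
	 * can be recomputed from it whenever it is actually needed.
	 */
	void *ptr = &o;
	unsigned long recomputed = (unsigned long)((char *)head - (char *)ptr);

	printf("same object: %d, offsets match: %d\n",
	       from_offset == &o, recomputed == offset);
	return 0;
}
```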
From patchwork Thu Jan 5 00:24:42 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 39193
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Uladzislau Rezki (Sony)", "Paul E. McKenney"
Subject: [PATCH rcu 2/8] rcu/kvfree: Switch to a generic linked list API
Date: Wed, 4 Jan 2023 16:24:42 -0800
Message-Id: <20230105002448.1768892-2-paulmck@kernel.org>
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

From: "Uladzislau Rezki (Sony)"

This commit improves the readability and maintainability of the kvfree_rcu() code by switching from an open-coded linked list to the standard Linux-kernel circular doubly linked list.
This patch does not introduce any functional change. Signed-off-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 89 +++++++++++++++++++++++------------------------ 1 file changed, 43 insertions(+), 46 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 7d222acd85bfd..4088b34ce9610 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2876,13 +2876,13 @@ EXPORT_SYMBOL_GPL(call_rcu); /** * struct kvfree_rcu_bulk_data - single block to store kvfree_rcu() pointers + * @list: List node. All blocks are linked between each other * @nr_records: Number of active pointers in the array - * @next: Next bulk object in the block chain * @records: Array of the kvfree_rcu() pointers */ struct kvfree_rcu_bulk_data { + struct list_head list; unsigned long nr_records; - struct kvfree_rcu_bulk_data *next; void *records[]; }; @@ -2898,21 +2898,21 @@ struct kvfree_rcu_bulk_data { * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period * @head_free: List of kfree_rcu() objects waiting for a grace period - * @bkvhead_free: Bulk-List of kvfree_rcu() objects waiting for a grace period + * @bulk_head_free: Bulk-List of kvfree_rcu() objects waiting for a grace period * @krcp: Pointer to @kfree_rcu_cpu structure */ struct kfree_rcu_cpu_work { struct rcu_work rcu_work; struct rcu_head *head_free; - struct kvfree_rcu_bulk_data *bkvhead_free[FREE_N_CHANNELS]; + struct list_head bulk_head_free[FREE_N_CHANNELS]; struct kfree_rcu_cpu *krcp; }; /** * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period * @head: List of kfree_rcu() objects not yet waiting for a grace period - * @bkvhead: Bulk-List of kvfree_rcu() objects not yet waiting for a grace period + * @bulk_head: Bulk-List of kvfree_rcu() objects not yet waiting for a grace period * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period * @lock: Synchronize access to this structure * @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES @@ -2936,7 +2936,7 @@ struct kfree_rcu_cpu_work { */ struct kfree_rcu_cpu { struct rcu_head *head; - struct kvfree_rcu_bulk_data *bkvhead[FREE_N_CHANNELS]; + struct list_head bulk_head[FREE_N_CHANNELS]; struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES]; raw_spinlock_t lock; struct delayed_work monitor_work; @@ -3031,12 +3031,13 @@ drain_page_cache(struct kfree_rcu_cpu *krcp) /* * This function is invoked in workqueue context after a grace period. - * It frees all the objects queued on ->bkvhead_free or ->head_free. + * It frees all the objects queued on ->bulk_head_free or ->head_free. */ static void kfree_rcu_work(struct work_struct *work) { unsigned long flags; - struct kvfree_rcu_bulk_data *bkvhead[FREE_N_CHANNELS], *bnext; + struct kvfree_rcu_bulk_data *bnode, *n; + struct list_head bulk_head[FREE_N_CHANNELS]; struct rcu_head *head, *next; struct kfree_rcu_cpu *krcp; struct kfree_rcu_cpu_work *krwp; @@ -3048,10 +3049,8 @@ static void kfree_rcu_work(struct work_struct *work) raw_spin_lock_irqsave(&krcp->lock, flags); // Channels 1 and 2. - for (i = 0; i < FREE_N_CHANNELS; i++) { - bkvhead[i] = krwp->bkvhead_free[i]; - krwp->bkvhead_free[i] = NULL; - } + for (i = 0; i < FREE_N_CHANNELS; i++) + list_replace_init(&krwp->bulk_head_free[i], &bulk_head[i]); // Channel 3. head = krwp->head_free; @@ -3060,36 +3059,33 @@ static void kfree_rcu_work(struct work_struct *work) // Handle the first two channels. 
for (i = 0; i < FREE_N_CHANNELS; i++) { - for (; bkvhead[i]; bkvhead[i] = bnext) { - bnext = bkvhead[i]->next; - debug_rcu_bhead_unqueue(bkvhead[i]); + list_for_each_entry_safe(bnode, n, &bulk_head[i], list) { + debug_rcu_bhead_unqueue(bnode); rcu_lock_acquire(&rcu_callback_map); if (i == 0) { // kmalloc() / kfree(). trace_rcu_invoke_kfree_bulk_callback( - rcu_state.name, bkvhead[i]->nr_records, - bkvhead[i]->records); + rcu_state.name, bnode->nr_records, + bnode->records); - kfree_bulk(bkvhead[i]->nr_records, - bkvhead[i]->records); + kfree_bulk(bnode->nr_records, bnode->records); } else { // vmalloc() / vfree(). - for (j = 0; j < bkvhead[i]->nr_records; j++) { + for (j = 0; j < bnode->nr_records; j++) { trace_rcu_invoke_kvfree_callback( - rcu_state.name, - bkvhead[i]->records[j], 0); + rcu_state.name, bnode->records[j], 0); - vfree(bkvhead[i]->records[j]); + vfree(bnode->records[j]); } } rcu_lock_release(&rcu_callback_map); raw_spin_lock_irqsave(&krcp->lock, flags); - if (put_cached_bnode(krcp, bkvhead[i])) - bkvhead[i] = NULL; + if (put_cached_bnode(krcp, bnode)) + bnode = NULL; raw_spin_unlock_irqrestore(&krcp->lock, flags); - if (bkvhead[i]) - free_page((unsigned long) bkvhead[i]); + if (bnode) + free_page((unsigned long) bnode); cond_resched_tasks_rcu_qs(); } @@ -3125,7 +3121,7 @@ need_offload_krc(struct kfree_rcu_cpu *krcp) int i; for (i = 0; i < FREE_N_CHANNELS; i++) - if (krcp->bkvhead[i]) + if (!list_empty(&krcp->bulk_head[i])) return true; return !!krcp->head; @@ -3162,21 +3158,20 @@ static void kfree_rcu_monitor(struct work_struct *work) for (i = 0; i < KFREE_N_BATCHES; i++) { struct kfree_rcu_cpu_work *krwp = &(krcp->krw_arr[i]); - // Try to detach bkvhead or head and attach it over any + // Try to detach bulk_head or head and attach it over any // available corresponding free channel. It can be that // a previous RCU batch is in progress, it means that // immediately to queue another one is not possible so // in that case the monitor work is rearmed. - if ((krcp->bkvhead[0] && !krwp->bkvhead_free[0]) || - (krcp->bkvhead[1] && !krwp->bkvhead_free[1]) || + if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) || + (!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) || (krcp->head && !krwp->head_free)) { + // Channel 1 corresponds to the SLAB-pointer bulk path. // Channel 2 corresponds to vmalloc-pointer bulk path. for (j = 0; j < FREE_N_CHANNELS; j++) { - if (!krwp->bkvhead_free[j]) { - krwp->bkvhead_free[j] = krcp->bkvhead[j]; - krcp->bkvhead[j] = NULL; - } + if (list_empty(&krwp->bulk_head_free[j])) + list_replace_init(&krcp->bulk_head[j], &krwp->bulk_head_free[j]); } // Channel 3 corresponds to both SLAB and vmalloc @@ -3288,10 +3283,11 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp, return false; idx = !!is_vmalloc_addr(ptr); + bnode = list_first_entry_or_null(&(*krcp)->bulk_head[idx], + struct kvfree_rcu_bulk_data, list); /* Check if a new block is required. */ - if (!(*krcp)->bkvhead[idx] || - (*krcp)->bkvhead[idx]->nr_records == KVFREE_BULK_MAX_ENTR) { + if (!bnode || bnode->nr_records == KVFREE_BULK_MAX_ENTR) { bnode = get_cached_bnode(*krcp); if (!bnode && can_alloc) { krc_this_cpu_unlock(*krcp, *flags); @@ -3315,18 +3311,13 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp, if (!bnode) return false; - /* Initialize the new block. */ + // Initialize the new block and attach it. bnode->nr_records = 0; - bnode->next = (*krcp)->bkvhead[idx]; - - /* Attach it to the head. 
*/ - (*krcp)->bkvhead[idx] = bnode; + list_add(&bnode->list, &(*krcp)->bulk_head[idx]); } /* Finally insert. */ - (*krcp)->bkvhead[idx]->records - [(*krcp)->bkvhead[idx]->nr_records++] = ptr; - + bnode->records[bnode->nr_records++] = ptr; return true; } @@ -4761,7 +4752,7 @@ struct workqueue_struct *rcu_gp_wq; static void __init kfree_rcu_batch_init(void) { int cpu; - int i; + int i, j; /* Clamp it to [0:100] seconds interval. */ if (rcu_delay_page_cache_fill_msec < 0 || @@ -4781,8 +4772,14 @@ static void __init kfree_rcu_batch_init(void) for (i = 0; i < KFREE_N_BATCHES; i++) { INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work); krcp->krw_arr[i].krcp = krcp; + + for (j = 0; j < FREE_N_CHANNELS; j++) + INIT_LIST_HEAD(&krcp->krw_arr[i].bulk_head_free[j]); } + for (i = 0; i < FREE_N_CHANNELS; i++) + INIT_LIST_HEAD(&krcp->bulk_head[i]); + INIT_DELAYED_WORK(&krcp->monitor_work, kfree_rcu_monitor); INIT_DELAYED_WORK(&krcp->page_cache_work, fill_page_cache_func); krcp->initialized = true;
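For readers less familiar with the <linux/list.h> idioms adopted above (list_add(), list_replace_init(), and safe traversal), here is a hedged, self-contained userspace re-implementation of the pattern; it is a simplified analogue, not the kernel's actual implementation. Blocks embed a list node, the monitor path detaches the whole chain in one step, and traversal keeps a saved "next" pointer so entries can be unlinked and freed while walking:

```c
#include <stdio.h>
#include <stddef.h>

/* Simplified userspace analogue of the kernel's list_head pattern. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

/* Move the whole chain from "old" onto "new" and reinitialize "old". */
static void list_replace_init(struct list_head *old, struct list_head *new)
{
	if (list_empty(old)) {
		INIT_LIST_HEAD(new);
		return;
	}
	new->next = old->next;
	new->prev = old->prev;
	new->next->prev = new;
	new->prev->next = new;
	INIT_LIST_HEAD(old);
}

struct block {
	struct list_head list;	/* embedded node, like kvfree_rcu_bulk_data */
	int nr_records;
};

int main(void)
{
	struct list_head pending, detached;
	struct block a = { .nr_records = 1 }, b = { .nr_records = 2 };
	struct list_head *pos, *n;

	INIT_LIST_HEAD(&pending);
	list_add(&a.list, &pending);
	list_add(&b.list, &pending);

	/* Detach everything in one step, as the monitor path does. */
	list_replace_init(&pending, &detached);

	/* Safe traversal: "n" is saved so the entry may be unlinked or freed. */
	for (pos = detached.next, n = pos->next; pos != &detached;
	     pos = n, n = pos->next) {
		struct block *blk = container_of(pos, struct block, list);
		printf("draining block with %d records\n", blk->nr_records);
	}
	return 0;
}
```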
From patchwork Thu Jan 5 00:24:43 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 39197
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Uladzislau Rezki (Sony)", "Paul E. McKenney"
Subject: [PATCH rcu 3/8] rcu/kvfree: Move bulk/list reclaim to separate functions
Date: Wed, 4 Jan 2023 16:24:43 -0800
Message-Id: <20230105002448.1768892-3-paulmck@kernel.org>
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

From: "Uladzislau Rezki (Sony)"

The kvfree_rcu() code maintains lists of pages of pointers, but also a singly linked list, with the latter being used when memory allocation fails.
Traversal of these two types of lists is currently open coded. This commit simplifies the code by providing kvfree_rcu_bulk() and kvfree_rcu_list() functions, respectively, to traverse these two types of lists. This patch does not introduce any functional change. Signed-off-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 114 ++++++++++++++++++++++++++-------------------- 1 file changed, 65 insertions(+), 49 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 4088b34ce9610..839e617f6c370 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -3029,6 +3029,65 @@ drain_page_cache(struct kfree_rcu_cpu *krcp) return freed; } +static void +kvfree_rcu_bulk(struct kfree_rcu_cpu *krcp, + struct kvfree_rcu_bulk_data *bnode, int idx) +{ + unsigned long flags; + int i; + + debug_rcu_bhead_unqueue(bnode); + + rcu_lock_acquire(&rcu_callback_map); + if (idx == 0) { // kmalloc() / kfree(). + trace_rcu_invoke_kfree_bulk_callback( + rcu_state.name, bnode->nr_records, + bnode->records); + + kfree_bulk(bnode->nr_records, bnode->records); + } else { // vmalloc() / vfree(). + for (i = 0; i < bnode->nr_records; i++) { + trace_rcu_invoke_kvfree_callback( + rcu_state.name, bnode->records[i], 0); + + vfree(bnode->records[i]); + } + } + rcu_lock_release(&rcu_callback_map); + + raw_spin_lock_irqsave(&krcp->lock, flags); + if (put_cached_bnode(krcp, bnode)) + bnode = NULL; + raw_spin_unlock_irqrestore(&krcp->lock, flags); + + if (bnode) + free_page((unsigned long) bnode); + + cond_resched_tasks_rcu_qs(); +} + +static void +kvfree_rcu_list(struct rcu_head *head) +{ + struct rcu_head *next; + + for (; head; head = next) { + void *ptr = (void *) head->func; + unsigned long offset = (void *) head - ptr; + + next = head->next; + debug_rcu_head_unqueue((struct rcu_head *)ptr); + rcu_lock_acquire(&rcu_callback_map); + trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset); + + if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset))) + kvfree(ptr); + + rcu_lock_release(&rcu_callback_map); + cond_resched_tasks_rcu_qs(); + } +} + /* * This function is invoked in workqueue context after a grace period. * It frees all the objects queued on ->bulk_head_free or ->head_free. @@ -3038,10 +3097,10 @@ static void kfree_rcu_work(struct work_struct *work) unsigned long flags; struct kvfree_rcu_bulk_data *bnode, *n; struct list_head bulk_head[FREE_N_CHANNELS]; - struct rcu_head *head, *next; + struct rcu_head *head; struct kfree_rcu_cpu *krcp; struct kfree_rcu_cpu_work *krwp; - int i, j; + int i; krwp = container_of(to_rcu_work(work), struct kfree_rcu_cpu_work, rcu_work); @@ -3058,38 +3117,9 @@ static void kfree_rcu_work(struct work_struct *work) raw_spin_unlock_irqrestore(&krcp->lock, flags); // Handle the first two channels. - for (i = 0; i < FREE_N_CHANNELS; i++) { - list_for_each_entry_safe(bnode, n, &bulk_head[i], list) { - debug_rcu_bhead_unqueue(bnode); - - rcu_lock_acquire(&rcu_callback_map); - if (i == 0) { // kmalloc() / kfree(). - trace_rcu_invoke_kfree_bulk_callback( - rcu_state.name, bnode->nr_records, - bnode->records); - - kfree_bulk(bnode->nr_records, bnode->records); - } else { // vmalloc() / vfree(). 
- for (j = 0; j < bnode->nr_records; j++) { - trace_rcu_invoke_kvfree_callback( - rcu_state.name, bnode->records[j], 0); - - vfree(bnode->records[j]); - } - } - rcu_lock_release(&rcu_callback_map); - - raw_spin_lock_irqsave(&krcp->lock, flags); - if (put_cached_bnode(krcp, bnode)) - bnode = NULL; - raw_spin_unlock_irqrestore(&krcp->lock, flags); - - if (bnode) - free_page((unsigned long) bnode); - - cond_resched_tasks_rcu_qs(); - } - } + for (i = 0; i < FREE_N_CHANNELS; i++) + list_for_each_entry_safe(bnode, n, &bulk_head[i], list) + kvfree_rcu_bulk(krcp, bnode, i); /* * This is used when the "bulk" path can not be used for the @@ -3098,21 +3128,7 @@ static void kfree_rcu_work(struct work_struct *work) * queued on a linked list through their rcu_head structures. * This list is named "Channel 3". */ - for (; head; head = next) { - void *ptr = (void *) head->func; - unsigned long offset = (void *) head - ptr; - - next = head->next; - debug_rcu_head_unqueue((struct rcu_head *)ptr); - rcu_lock_acquire(&rcu_callback_map); - trace_rcu_invoke_kvfree_callback(rcu_state.name, head, offset); - - if (!WARN_ON_ONCE(!__is_kvfree_rcu_offset(offset))) - kvfree(ptr); - - rcu_lock_release(&rcu_callback_map); - cond_resched_tasks_rcu_qs(); - } + kvfree_rcu_list(head); } static bool
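To make the two reclaim shapes concrete, below is a small userspace sketch (illustrative names and simplified ownership, not the kernel code) of what kvfree_rcu_bulk() and kvfree_rcu_list() each drain: a block carrying an array of pointers that is freed in one pass, and a fallback chain of headers walked one object at a time. In the kernel the header is embedded in the object itself and a single kvfree() releases both; here the two allocations are kept separate only to keep the sketch short:

```c
#include <stdio.h>
#include <stdlib.h>

/* Bulk channel: one block holds an array of pointers to free. */
struct bulk_block {
	int nr_records;
	void *records[8];
};

/* Fallback channel: objects chained through a header (embedded in the kernel). */
struct header {
	struct header *next;
	void *ptr;		/* object to free */
};

/* Rough analogue of kvfree_rcu_bulk(): drain one block of pointers. */
static void drain_bulk(struct bulk_block *b)
{
	for (int i = 0; i < b->nr_records; i++)
		free(b->records[i]);
	b->nr_records = 0;
}

/* Rough analogue of kvfree_rcu_list(): walk the chain, freeing as we go. */
static void drain_list(struct header *head)
{
	struct header *next;

	for (; head; head = next) {
		next = head->next;
		free(head->ptr);
		free(head);
	}
}

int main(void)
{
	struct bulk_block blk = { 0 };
	struct header *chain = NULL;

	blk.records[blk.nr_records++] = malloc(16);
	blk.records[blk.nr_records++] = malloc(32);

	struct header *h = malloc(sizeof(*h));
	h->ptr = malloc(64);
	h->next = chain;
	chain = h;

	drain_bulk(&blk);	/* "channels 1 and 2" */
	drain_list(chain);	/* "channel 3" */
	puts("drained both channels");
	return 0;
}
```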
From patchwork Thu Jan 5 00:24:44 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 39196
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Uladzislau Rezki (Sony)", "Paul E. McKenney"
Subject: [PATCH rcu 4/8] rcu/kvfree: Move need_offload_krc() out of krcp->lock
Date: Wed, 4 Jan 2023 16:24:44 -0800
Message-Id: <20230105002448.1768892-4-paulmck@kernel.org>
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

From: "Uladzislau Rezki (Sony)"

The need_offload_krc() function currently holds the krcp->lock in order to safely check krcp->head.
This commit removes the need for this lock in that function by updating the krcp->head pointer using the WRITE_ONCE() macro so that readers can carry out lockless loads of that pointer.

Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 839e617f6c370..0c42fce4efe32 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3194,7 +3194,7 @@ static void kfree_rcu_monitor(struct work_struct *work) // objects queued on the linked list. if (!krwp->head_free) { krwp->head_free = krcp->head; - krcp->head = NULL; + WRITE_ONCE(krcp->head, NULL); } WRITE_ONCE(krcp->count, 0);
@@ -3208,6 +3208,8 @@ static void kfree_rcu_monitor(struct work_struct *work) } } + raw_spin_unlock_irqrestore(&krcp->lock, flags); + // If there is nothing to detach, it means that our job is // successfully done here. In case of having at least one // of the channels that is still busy we should rearm the
@@ -3215,8 +3217,6 @@ static void kfree_rcu_monitor(struct work_struct *work) // still in progress. if (need_offload_krc(krcp)) schedule_delayed_monitor_work(krcp); - - raw_spin_unlock_irqrestore(&krcp->lock, flags); } static enum hrtimer_restart
@@ -3386,7 +3386,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr) head->func = ptr; head->next = krcp->head; - krcp->head = head; + WRITE_ONCE(krcp->head, head); success = true; }
@@ -3463,15 +3463,12 @@ static struct shrinker kfree_rcu_shrinker = { void __init kfree_rcu_scheduler_running(void) { int cpu; - unsigned long flags; for_each_possible_cpu(cpu) { struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); - raw_spin_lock_irqsave(&krcp->lock, flags); if (need_offload_krc(krcp)) schedule_delayed_monitor_work(krcp); - raw_spin_unlock_irqrestore(&krcp->lock, flags); } }
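The key idea is that once need_offload_krc() stops taking krcp->lock, the ->head pointer becomes a lockless shared variable, so its store (and, in a later patch of this series, its load) must use marked accesses. The sketch below models WRITE_ONCE()/READ_ONCE() as volatile accesses in userspace (illustrative only; the real kernel macros also interact with KCSAN and compiler diagnostics) to show the intended single, untorn store and load:

```c
#include <stdio.h>

/* Simplified userspace models of the kernel's marked-access macros. */
#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)		(*(volatile __typeof__(x) *)&(x))

struct node { struct node *next; };

static struct node *head;	/* shared: writer updates, reader polls */

/* Writer side (would run under krcp->lock in the kernel). */
static void publish(struct node *n)
{
	n->next = head;
	WRITE_ONCE(head, n);	/* single store the compiler may not tear or defer */
}

/* Reader side (lockless, like need_offload_krc() after this series). */
static int work_pending(void)
{
	return READ_ONCE(head) != NULL;	/* single load, not refetched */
}

int main(void)
{
	struct node n = { 0 };

	printf("pending before: %d\n", work_pending());
	publish(&n);
	printf("pending after: %d\n", work_pending());
	return 0;
}
```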
McKenney" X-Patchwork-Id: 39190 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:4e01:0:0:0:0:0 with SMTP id p1csp41760wrt; Wed, 4 Jan 2023 16:29:57 -0800 (PST) X-Google-Smtp-Source: AMrXdXsWx5GeUfqNp3RkKgfi2qogQr98OfQVYbZBvmTIieQS94KipryL9rEmu26nRVGm2i3F24Bn X-Received: by 2002:a17:903:1d0:b0:192:4f32:3ba7 with SMTP id e16-20020a17090301d000b001924f323ba7mr62697904plh.18.1672878597162; Wed, 04 Jan 2023 16:29:57 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1672878597; cv=none; d=google.com; s=arc-20160816; b=MBYzpBw0SryPxfG++Hh5wmt3YiJpRzUxBVWySXQYtfESOaW8MZK9+T1T4VgKOljIqF qzeocxs3I5eT2bRdvQ+J4lTgZtAet2sgGdeN7h6NQk/BkHVkq1gqnweqkO+UY7W6fJGn o69e+jIrhbTsU7ugcA4PJfimreHtQXpVJC20EnkQ19/ujWyp0jC1esMD+I97kuL1C9rG 7qiXj4XzUcxXW2fkmwCt4HqHwBXQRL6w69P37oEBzKuDhrjk2Z0+e9t0icfqFv0Fqs7N 0xqokMJeBf7mihYB+FuhW4YGijiTUc8VZJcBlECD3jsUfjJubVZIcZPrRnVKhOUKvTVE v3pg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=/Xh0BSqdoE8IerG/YnQw+GiAxLIlKLFcj1I58vKwO74=; b=c+tp4VrGNNR0vspAGkh3wbXisTtjpj1Its0x2xKaq98Aa0IEvxe+IC5Zjm0f1uKfp+ Wz337NcCw0X49shk+vCTDjzGsK7OXl4w9+yjmvLoOth4PD70RcRfrXoFDb1grIfmD1Gk M1/CWDe2zQQNmFdW5+eFznRHiXXkQzJ9hZPtF9Hw+xT9SQcsoXWQkSRlTJ4yhC3lrteu 3AH8ItYqRmzsaTdE6d/kK0vX7J2+naPv7+RAxS/j5FCK9CLBOZhCcqKcM1MZzf4qdY7Q t5bG4qQf4R/x6GBhpEttEB2idDcql+AWgaavjYCwrLAUYTkZvgq2m4zkvVctEPSETMlk LESA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=QCQBIAXb; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id p4-20020a170902e74400b001897e34ac5dsi39085095plf.289.2023.01.04.16.29.44; Wed, 04 Jan 2023 16:29:57 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@kernel.org header.s=k20201202 header.b=QCQBIAXb; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240369AbjAEA0t (ORCPT + 99 others); Wed, 4 Jan 2023 19:26:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33380 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235517AbjAEA0I (ORCPT ); Wed, 4 Jan 2023 19:26:08 -0500 Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A820850F51; Wed, 4 Jan 2023 16:24:55 -0800 (PST) Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by dfw.source.kernel.org (Postfix) with ESMTPS id E2DF26188D; Thu, 5 Jan 2023 00:24:50 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3D264C433F0; Thu, 5 Jan 2023 00:24:50 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1672878290; bh=LCE+SHgExBVrEz2EAshUB2ptHQamWCte/NKiDUBtId4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QCQBIAXbEoDtbjF0OqnJDoNR0jiYtqCZCEyLTiGY1uD2OGAmEYKRbyPsUJybLSfC8 rOlvkjRFFZKp/rW7wvfFTBKCdAnXyktk3VJXmMmTyHUpywkPevc0SRdPTLuOnWTj1J 6I0rv/ORP1485Nvwutltd6V6SPsrSEyToCDE9RgEP6CCPSiWavLI3zl4UshUUecxoY oUD98NwCIYScSC0/faMcOy65cLDgUkdxKByqyvXl5SDV9cOYzFjyHPDZNjMq//6ntP hrRU/C2S8NCE52NOOWDNAvaMAqqiv3egZF/LSnZCG+vIdr9KUnUfS1BOOZbg9rQXT0 Egen0kTu+23sw== Received: by paulmck-ThinkPad-P17-Gen-1.home (Postfix, from userid 1000) id EA3D15C149B; Wed, 4 Jan 2023 16:24:49 -0800 (PST) From: "Paul E. McKenney" To: rcu@vger.kernel.org Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Uladzislau Rezki (Sony)" , "Paul E . McKenney" Subject: [PATCH rcu 5/8] rcu/kvfree: Use a polled API to speedup a reclaim process Date: Wed, 4 Jan 2023 16:24:45 -0800 Message-Id: <20230105002448.1768892-5-paulmck@kernel.org> X-Mailer: git-send-email 2.31.1.189.g2e36527f23 In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1> References: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1> MIME-Version: 1.0 X-Spam-Status: No, score=-7.1 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_HI, SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1754140347902964267?= X-GMAIL-MSGID: =?utf-8?q?1754140347902964267?= From: "Uladzislau Rezki (Sony)" Currently all objects placed into a batch wait for a full grace period to elapse after that batch is ready to send to RCU. 
However, this can unnecessarily delay freeing of the first objects that were added to the batch. After all, several RCU grace periods might have elapsed since those objects were added, and if so, there is no point in further deferring their freeing. This commit therefore adds per-page grace-period snapshots which are obtained from get_state_synchronize_rcu(). When the batch is ready to be passed to call_rcu(), each page's snapshot is checked by passing it to poll_state_synchronize_rcu(). If a given page's RCU grace period has already elapsed, its objects are freed immediately by kvfree_rcu_bulk(). Otherwise, these objects are freed after a call to synchronize_rcu(). This approach requires that the pages be traversed in reverse order, that is, the oldest ones first. Test example: kvm.sh --memory 10G --torture rcuscale --allcpus --duration 1 \ --kconfig CONFIG_NR_CPUS=64 \ --kconfig CONFIG_RCU_NOCB_CPU=y \ --kconfig CONFIG_RCU_NOCB_CPU_DEFAULT_ALL=y \ --kconfig CONFIG_RCU_LAZY=n \ --bootargs "rcuscale.kfree_rcu_test=1 rcuscale.kfree_nthreads=16 \ rcuscale.holdoff=20 rcuscale.kfree_loops=10000 \ torture.disable_onoff_at_boot" --trust-make Before this commit: Total time taken by all kfree'ers: 8535693700 ns, loops: 10000, batches: 1188, memory footprint: 2248MB Total time taken by all kfree'ers: 8466933582 ns, loops: 10000, batches: 1157, memory footprint: 2820MB Total time taken by all kfree'ers: 5375602446 ns, loops: 10000, batches: 1130, memory footprint: 6502MB Total time taken by all kfree'ers: 7523283832 ns, loops: 10000, batches: 1006, memory footprint: 3343MB Total time taken by all kfree'ers: 6459171956 ns, loops: 10000, batches: 1150, memory footprint: 6549MB After this commit: Total time taken by all kfree'ers: 8560060176 ns, loops: 10000, batches: 1787, memory footprint: 61MB Total time taken by all kfree'ers: 8573885501 ns, loops: 10000, batches: 1777, memory footprint: 93MB Total time taken by all kfree'ers: 8320000202 ns, loops: 10000, batches: 1727, memory footprint: 66MB Total time taken by all kfree'ers: 8552718794 ns, loops: 10000, batches: 1790, memory footprint: 75MB Total time taken by all kfree'ers: 8601368792 ns, loops: 10000, batches: 1724, memory footprint: 62MB The reduction in memory footprint is well in excess of an order of magnitude. Signed-off-by: Uladzislau Rezki (Sony) Signed-off-by: Paul E. McKenney --- kernel/rcu/tree.c | 47 +++++++++++++++++++++++++++++++++++++++-------- 1 file changed, 39 insertions(+), 8 deletions(-) diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c index 0c42fce4efe32..735312f78e980 100644 --- a/kernel/rcu/tree.c +++ b/kernel/rcu/tree.c @@ -2877,11 +2877,13 @@ EXPORT_SYMBOL_GPL(call_rcu); /** * struct kvfree_rcu_bulk_data - single block to store kvfree_rcu() pointers * @list: List node. 
All blocks are linked between each other + * @gp_snap: Snapshot of RCU state for objects placed to this bulk * @nr_records: Number of active pointers in the array * @records: Array of the kvfree_rcu() pointers */ struct kvfree_rcu_bulk_data { struct list_head list; + unsigned long gp_snap; unsigned long nr_records; void *records[]; }; @@ -2898,13 +2900,15 @@ struct kvfree_rcu_bulk_data { * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period * @head_free: List of kfree_rcu() objects waiting for a grace period + * @head_free_gp_snap: Snapshot of RCU state for objects placed to "@head_free" * @bulk_head_free: Bulk-List of kvfree_rcu() objects waiting for a grace period * @krcp: Pointer to @kfree_rcu_cpu structure */ struct kfree_rcu_cpu_work { - struct rcu_work rcu_work; + struct work_struct rcu_work; struct rcu_head *head_free; + unsigned long head_free_gp_snap; struct list_head bulk_head_free[FREE_N_CHANNELS]; struct kfree_rcu_cpu *krcp; }; @@ -3100,10 +3104,11 @@ static void kfree_rcu_work(struct work_struct *work) struct rcu_head *head; struct kfree_rcu_cpu *krcp; struct kfree_rcu_cpu_work *krwp; + unsigned long head_free_gp_snap; int i; - krwp = container_of(to_rcu_work(work), - struct kfree_rcu_cpu_work, rcu_work); + krwp = container_of(work, + struct kfree_rcu_cpu_work, rcu_work); krcp = krwp->krcp; raw_spin_lock_irqsave(&krcp->lock, flags); @@ -3114,12 +3119,29 @@ static void kfree_rcu_work(struct work_struct *work) // Channel 3. head = krwp->head_free; krwp->head_free = NULL; + head_free_gp_snap = krwp->head_free_gp_snap; raw_spin_unlock_irqrestore(&krcp->lock, flags); // Handle the first two channels. - for (i = 0; i < FREE_N_CHANNELS; i++) + for (i = 0; i < FREE_N_CHANNELS; i++) { + // Start from the tail page, so a GP is likely passed for it. + list_for_each_entry_safe_reverse(bnode, n, &bulk_head[i], list) { + // Not yet ready? Bail out since we need one more GP. + if (!poll_state_synchronize_rcu(bnode->gp_snap)) + break; + + list_del_init(&bnode->list); + kvfree_rcu_bulk(krcp, bnode, i); + } + + // Please note a request for one more extra GP can + // occur only once for all objects in this batch. + if (!list_empty(&bulk_head[i])) + synchronize_rcu(); + list_for_each_entry_safe(bnode, n, &bulk_head[i], list) kvfree_rcu_bulk(krcp, bnode, i); + } /* * This is used when the "bulk" path can not be used for the @@ -3128,7 +3150,10 @@ static void kfree_rcu_work(struct work_struct *work) * queued on a linked list through their rcu_head structures. * This list is named "Channel 3". */ - kvfree_rcu_list(head); + if (head) { + cond_synchronize_rcu(head_free_gp_snap); + kvfree_rcu_list(head); + } } static bool @@ -3195,6 +3220,11 @@ static void kfree_rcu_monitor(struct work_struct *work) if (!krwp->head_free) { krwp->head_free = krcp->head; WRITE_ONCE(krcp->head, NULL); + + // Take a snapshot for this krwp. Please note no more + // any objects can be added to attached head_free channel + // therefore fixate a GP for it here. + krwp->head_free_gp_snap = get_state_synchronize_rcu(); } WRITE_ONCE(krcp->count, 0); @@ -3204,7 +3234,7 @@ static void kfree_rcu_monitor(struct work_struct *work) // be that the work is in the pending state when // channels have been detached following by each // other. 
- queue_rcu_work(system_wq, &krwp->rcu_work); + queue_work(system_wq, &krwp->rcu_work); } } @@ -3332,8 +3362,9 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp, list_add(&bnode->list, &(*krcp)->bulk_head[idx]); } - /* Finally insert. */ + // Finally insert and update the GP for this page. bnode->records[bnode->nr_records++] = ptr; + bnode->gp_snap = get_state_synchronize_rcu(); return true; } @@ -4783,7 +4814,7 @@ static void __init kfree_rcu_batch_init(void) struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu); for (i = 0; i < KFREE_N_BATCHES; i++) { - INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work); + INIT_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work); krcp->krw_arr[i].krcp = krcp; for (j = 0; j < FREE_N_CHANNELS; j++)
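The heart of this patch is the cookie-based "snapshot, then poll" pattern provided by get_state_synchronize_rcu() and poll_state_synchronize_rcu(). The userspace analogue below replaces the grace-period machinery with a hand-advanced counter (the names and the counter model are illustrative assumptions, not the kernel implementation) to show how a per-block snapshot lets blocks whose grace period has already elapsed be freed immediately, while a single extra wait covers whatever remains:

```c
#include <stdio.h>
#include <stdbool.h>

/*
 * Stand-in for the RCU grace-period sequence: in the kernel this is driven
 * by get_state_synchronize_rcu()/poll_state_synchronize_rcu(); here it is
 * just a counter advanced by hand.
 */
static unsigned long gp_seq;

static unsigned long get_state(void)       { return gp_seq + 1; }
static bool poll_state(unsigned long snap) { return gp_seq >= snap; }
static void synchronize(void)              { gp_seq++; }	/* "wait" for one GP */

struct block {
	unsigned long gp_snap;	/* snapshot taken when the block was filled */
	int id;
};

int main(void)
{
	struct block blocks[3];

	/* Queue three blocks; a grace period elapses after the first two. */
	blocks[0].id = 0; blocks[0].gp_snap = get_state();
	blocks[1].id = 1; blocks[1].gp_snap = get_state();
	synchronize();
	blocks[2].id = 2; blocks[2].gp_snap = get_state();

	/* Reclaim pass: free what is already safe, wait once for the rest. */
	for (int i = 0; i < 3; i++) {
		if (!poll_state(blocks[i].gp_snap))
			synchronize();	/* one extra GP covers all remaining blocks */
		printf("freeing block %d\n", blocks[i].id);
	}
	return 0;
}
```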
From patchwork Thu Jan 5 00:24:46 2023
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 39189
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Uladzislau Rezki (Sony)", "Paul E. McKenney"
Subject: [PATCH rcu 6/8] rcu/kvfree: Use READ_ONCE() when access to krcp->head
Date: Wed, 4 Jan 2023 16:24:46 -0800
Message-Id: <20230105002448.1768892-6-paulmck@kernel.org>
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>
References: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

From: "Uladzislau Rezki (Sony)"

The need_offload_krc() function is now lock-free, which gives the compiler the freedom to load stale values via plain C-language loads of the kfree_rcu_cpu structure's ->head pointer.
Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 735312f78e980..02551e0e11328 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3165,7 +3165,7 @@ need_offload_krc(struct kfree_rcu_cpu *krcp)
 		if (!list_empty(&krcp->bulk_head[i]))
 			return true;

-	return !!krcp->head;
+	return !!READ_ONCE(krcp->head);
 }

 static void
@@ -3206,7 +3206,7 @@ static void kfree_rcu_monitor(struct work_struct *work)
 		// in that case the monitor work is rearmed.
 		if ((!list_empty(&krcp->bulk_head[0]) && list_empty(&krwp->bulk_head_free[0])) ||
 			(!list_empty(&krcp->bulk_head[1]) && list_empty(&krwp->bulk_head_free[1])) ||
-				(krcp->head && !krwp->head_free)) {
+				(READ_ONCE(krcp->head) && !krwp->head_free)) {

 			// Channel 1 corresponds to the SLAB-pointer bulk path.
 			// Channel 2 corresponds to vmalloc-pointer bulk path.

From patchwork Thu Jan 5 00:24:47 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Uladzislau Rezki (Sony)", "Paul E . McKenney"
Subject: [PATCH rcu 7/8] rcu/kvfree: Carefully reset number of objects in krcp
Date: Wed, 4 Jan 2023 16:24:47 -0800
Message-Id: <20230105002448.1768892-7-paulmck@kernel.org>
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

From: "Uladzislau Rezki (Sony)"

The schedule_delayed_monitor_work() function relies on the count of objects
queued into any given kfree_rcu_cpu structure. This count is used to
determine how quickly to schedule passing these objects to RCU. There are
three pipes where pointers can be placed. When any pipe is offloaded, the
kfree_rcu_cpu structure's ->count counter is set to zero, which is wrong
because the other pipes might still be non-empty.

This commit therefore maintains per-pipe counters, and introduces a
krc_count() helper to access the aggregate value of those counters.
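As a rough stand-alone sketch of the counting scheme described above (the
names are hypothetical and the channel count is reduced for brevity): each
pipe keeps its own atomic counter, the scheduling decision sums them, and
offloading one pipe resets only that pipe's counter.

#include <linux/atomic.h>

#define DEMO_N_CHANNELS 2

struct demo_krc {
	atomic_t head_count;			/* rcu_head channel */
	atomic_t bulk_count[DEMO_N_CHANNELS];	/* bulk channels */
};

/* Aggregate across all pipes, in the spirit of krc_count(). */
static int demo_count(struct demo_krc *krc)
{
	int i, sum = atomic_read(&krc->head_count);

	for (i = 0; i < DEMO_N_CHANNELS; i++)
		sum += atomic_read(&krc->bulk_count[i]);

	return sum;
}

/* Offloading one bulk pipe clears only that pipe's counter, so the
 * other pipes' contributions to demo_count() are preserved. */
static void demo_offload_bulk(struct demo_krc *krc, int idx)
{
	atomic_set(&krc->bulk_count[idx], 0);
}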
Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 40 ++++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 02551e0e11328..52f4c7e87f88e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2921,7 +2921,8 @@ struct kfree_rcu_cpu_work {
  * @lock: Synchronize access to this structure
  * @monitor_work: Promote @head to @head_free after KFREE_DRAIN_JIFFIES
  * @initialized: The @rcu_work fields have been initialized
- * @count: Number of objects for which GP not started
+ * @head_count: Number of objects in rcu_head singular list
+ * @bulk_count: Number of objects in bulk-list
  * @bkvcache:
  *	A simple cache list that contains objects for reuse purpose.
  *	In order to save some per-cpu space the list is singular.
@@ -2939,13 +2940,19 @@ struct kfree_rcu_cpu_work {
  * the interactions with the slab allocators.
  */
 struct kfree_rcu_cpu {
+	// Objects queued on a linked list
+	// through their rcu_head structures.
 	struct rcu_head *head;
+	atomic_t head_count;
+
+	// Objects queued on a bulk-list.
 	struct list_head bulk_head[FREE_N_CHANNELS];
+	atomic_t bulk_count[FREE_N_CHANNELS];
+
 	struct kfree_rcu_cpu_work krw_arr[KFREE_N_BATCHES];
 	raw_spinlock_t lock;
 	struct delayed_work monitor_work;
 	bool initialized;
-	int count;

 	struct delayed_work page_cache_work;
 	atomic_t backoff_page_cache_fill;
@@ -3168,12 +3175,23 @@ need_offload_krc(struct kfree_rcu_cpu *krcp)
 	return !!READ_ONCE(krcp->head);
 }

+static int krc_count(struct kfree_rcu_cpu *krcp)
+{
+	int sum = atomic_read(&krcp->head_count);
+	int i;
+
+	for (i = 0; i < FREE_N_CHANNELS; i++)
+		sum += atomic_read(&krcp->bulk_count[i]);
+
+	return sum;
+}
+
 static void
 schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
 {
 	long delay, delay_left;

-	delay = READ_ONCE(krcp->count) >= KVFREE_BULK_MAX_ENTR ? 1:KFREE_DRAIN_JIFFIES;
+	delay = krc_count(krcp) >= KVFREE_BULK_MAX_ENTR ? 1:KFREE_DRAIN_JIFFIES;
 	if (delayed_work_pending(&krcp->monitor_work)) {
 		delay_left = krcp->monitor_work.timer.expires - jiffies;
 		if (delay < delay_left)
@@ -3211,8 +3229,10 @@ static void kfree_rcu_monitor(struct work_struct *work)
 			// Channel 1 corresponds to the SLAB-pointer bulk path.
 			// Channel 2 corresponds to vmalloc-pointer bulk path.
 			for (j = 0; j < FREE_N_CHANNELS; j++) {
-				if (list_empty(&krwp->bulk_head_free[j]))
+				if (list_empty(&krwp->bulk_head_free[j])) {
 					list_replace_init(&krcp->bulk_head[j], &krwp->bulk_head_free[j]);
+					atomic_set(&krcp->bulk_count[j], 0);
+				}
 			}

 			// Channel 3 corresponds to both SLAB and vmalloc
@@ -3220,6 +3240,7 @@ static void kfree_rcu_monitor(struct work_struct *work)
 			if (!krwp->head_free) {
 				krwp->head_free = krcp->head;
 				WRITE_ONCE(krcp->head, NULL);
+				atomic_set(&krcp->head_count, 0);

 				// Take a snapshot for this krwp. Please note no more
 				// any objects can be added to attached head_free channel
@@ -3227,8 +3248,6 @@
 				krwp->head_free_gp_snap = get_state_synchronize_rcu();
 			}

-			WRITE_ONCE(krcp->count, 0);
-
 			// One work is per one batch, so there are three
 			// "free channels", the batch can handle. It can
 			// be that the work is in the pending state when
@@ -3365,6 +3384,8 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
 	// Finally insert and update the GP for this page.
 	bnode->records[bnode->nr_records++] = ptr;
 	bnode->gp_snap = get_state_synchronize_rcu();
+	atomic_inc(&(*krcp)->bulk_count[idx]);
+
 	return true;
 }

@@ -3418,11 +3439,10 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
 		head->func = ptr;
 		head->next = krcp->head;
 		WRITE_ONCE(krcp->head, head);
+		atomic_inc(&krcp->head_count);
 		success = true;
 	}

-	WRITE_ONCE(krcp->count, krcp->count + 1);
-
 	// Set timer to drain after KFREE_DRAIN_JIFFIES.
 	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING)
 		schedule_delayed_monitor_work(krcp);
@@ -3453,7 +3473,7 @@ kfree_rcu_shrink_count(struct shrinker *shrink, struct shrink_control *sc)
 	for_each_possible_cpu(cpu) {
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

-		count += READ_ONCE(krcp->count);
+		count += krc_count(krcp);
 		count += READ_ONCE(krcp->nr_bkv_objs);
 		atomic_set(&krcp->backoff_page_cache_fill, 1);
 	}
@@ -3470,7 +3490,7 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		int count;
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

-		count = krcp->count;
+		count = krc_count(krcp);
 		count += drain_page_cache(krcp);
 		kfree_rcu_monitor(&krcp->monitor_work.work);

From patchwork Thu Jan 5 00:24:48 2023
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org, "Uladzislau Rezki (Sony)", "Paul E . McKenney"
Subject: [PATCH rcu 8/8] rcu/kvfree: Split ready for reclaim objects from a batch
Date: Wed, 4 Jan 2023 16:24:48 -0800
Message-Id: <20230105002448.1768892-8-paulmck@kernel.org>
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

From: "Uladzislau Rezki (Sony)"

This patch splits the lists of objects so as to avoid sending any through
RCU that have already been queued for more than one grace period. These
long-term-resident objects are immediately freed. The remaining
short-term-resident objects are queued for later freeing using
queue_rcu_work().

This change avoids delaying workqueue handlers with synchronize_rcu()
invocations. Yes, workqueue handlers are designed to handle blocking, but
avoiding blocking when unnecessary improves performance during low-memory
situations.
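The split leans on RCU's polled grace-period API: a cookie is recorded when
an object is queued, and later polled to see whether a full grace period has
already elapsed for it. Below is a minimal stand-alone sketch of that
pattern with hypothetical names and no locking or batching; it is not the
patch itself.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_obj {
	unsigned long gp_snap;	/* cookie from get_state_synchronize_rcu() */
	void *payload;
};

/* Queue time: remember the current grace-period state. */
static void demo_queue(struct demo_obj *obj)
{
	obj->gp_snap = get_state_synchronize_rcu();
}

/* Drain time: if the recorded grace period has already elapsed, the
 * object can be freed immediately; otherwise it must keep waiting, for
 * example via queue_rcu_work(), rather than blocking in
 * synchronize_rcu(). */
static bool demo_try_reclaim(struct demo_obj *obj)
{
	if (poll_state_synchronize_rcu(obj->gp_snap)) {
		kfree(obj->payload);
		return true;
	}
	return false;
}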
Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/tree.c | 87 +++++++++++++++++++++++++++++------------------
 1 file changed, 54 insertions(+), 33 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 52f4c7e87f88e..0b4f7dd551572 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2900,15 +2900,13 @@ struct kvfree_rcu_bulk_data {
  * struct kfree_rcu_cpu_work - single batch of kfree_rcu() requests
  * @rcu_work: Let queue_rcu_work() invoke workqueue handler after grace period
  * @head_free: List of kfree_rcu() objects waiting for a grace period
- * @head_free_gp_snap: Snapshot of RCU state for objects placed to "@head_free"
  * @bulk_head_free: Bulk-List of kvfree_rcu() objects waiting for a grace period
  * @krcp: Pointer to @kfree_rcu_cpu structure
  */
 struct kfree_rcu_cpu_work {
-	struct work_struct rcu_work;
+	struct rcu_work rcu_work;
 	struct rcu_head *head_free;
-	unsigned long head_free_gp_snap;
 	struct list_head bulk_head_free[FREE_N_CHANNELS];
 	struct kfree_rcu_cpu *krcp;
 };
@@ -2916,6 +2914,7 @@ struct kfree_rcu_cpu_work {
 /**
  * struct kfree_rcu_cpu - batch up kfree_rcu() requests for RCU grace period
  * @head: List of kfree_rcu() objects not yet waiting for a grace period
+ * @head_gp_snap: Snapshot of RCU state for objects placed to "@head"
  * @bulk_head: Bulk-List of kvfree_rcu() objects not yet waiting for a grace period
  * @krw_arr: Array of batches of kfree_rcu() objects waiting for a grace period
  * @lock: Synchronize access to this structure
@@ -2943,6 +2942,7 @@ struct kfree_rcu_cpu {
 	// Objects queued on a linked list
 	// through their rcu_head structures.
 	struct rcu_head *head;
+	unsigned long head_gp_snap;
 	atomic_t head_count;

 	// Objects queued on a bulk-list.
@@ -3111,10 +3111,9 @@ static void kfree_rcu_work(struct work_struct *work)
 	struct rcu_head *head;
 	struct kfree_rcu_cpu *krcp;
 	struct kfree_rcu_cpu_work *krwp;
-	unsigned long head_free_gp_snap;
 	int i;

-	krwp = container_of(work,
+	krwp = container_of(to_rcu_work(work),
 		struct kfree_rcu_cpu_work, rcu_work);
 	krcp = krwp->krcp;

@@ -3126,26 +3125,11 @@ static void kfree_rcu_work(struct work_struct *work)
 	// Channel 3.
 	head = krwp->head_free;
 	krwp->head_free = NULL;
-	head_free_gp_snap = krwp->head_free_gp_snap;
 	raw_spin_unlock_irqrestore(&krcp->lock, flags);

 	// Handle the first two channels.
 	for (i = 0; i < FREE_N_CHANNELS; i++) {
 		// Start from the tail page, so a GP is likely passed for it.
-		list_for_each_entry_safe_reverse(bnode, n, &bulk_head[i], list) {
-			// Not yet ready? Bail out since we need one more GP.
-			if (!poll_state_synchronize_rcu(bnode->gp_snap))
-				break;
-
-			list_del_init(&bnode->list);
-			kvfree_rcu_bulk(krcp, bnode, i);
-		}
-
-		// Please note a request for one more extra GP can
-		// occur only once for all objects in this batch.
-		if (!list_empty(&bulk_head[i]))
-			synchronize_rcu();
-
 		list_for_each_entry_safe(bnode, n, &bulk_head[i], list)
 			kvfree_rcu_bulk(krcp, bnode, i);
 	}
@@ -3157,10 +3141,7 @@ static void kfree_rcu_work(struct work_struct *work)
	 * queued on a linked list through their rcu_head structures.
	 * This list is named "Channel 3".
	 */
-	if (head) {
-		cond_synchronize_rcu(head_free_gp_snap);
-		kvfree_rcu_list(head);
-	}
+	kvfree_rcu_list(head);
 }

 static bool
@@ -3201,6 +3182,44 @@ schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
 	queue_delayed_work(system_wq, &krcp->monitor_work, delay);
 }

+static void
+kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp)
+{
+	struct list_head bulk_ready[FREE_N_CHANNELS];
+	struct kvfree_rcu_bulk_data *bnode, *n;
+	struct rcu_head *head_ready = NULL;
+	unsigned long flags;
+	int i;
+
+	raw_spin_lock_irqsave(&krcp->lock, flags);
+	for (i = 0; i < FREE_N_CHANNELS; i++) {
+		INIT_LIST_HEAD(&bulk_ready[i]);
+
+		list_for_each_entry_safe_reverse(bnode, n, &krcp->bulk_head[i], list) {
+			if (!poll_state_synchronize_rcu(bnode->gp_snap))
+				break;
+
+			atomic_sub(bnode->nr_records, &krcp->bulk_count[i]);
+			list_move(&bnode->list, &bulk_ready[i]);
+		}
+	}
+
+	if (krcp->head && poll_state_synchronize_rcu(krcp->head_gp_snap)) {
+		head_ready = krcp->head;
+		atomic_set(&krcp->head_count, 0);
+		WRITE_ONCE(krcp->head, NULL);
+	}
+	raw_spin_unlock_irqrestore(&krcp->lock, flags);
+
+	for (i = 0; i < FREE_N_CHANNELS; i++) {
+		list_for_each_entry_safe(bnode, n, &bulk_ready[i], list)
+			kvfree_rcu_bulk(krcp, bnode, i);
+	}
+
+	if (head_ready)
+		kvfree_rcu_list(head_ready);
+}
+
 /*
  * This function is invoked after the KFREE_DRAIN_JIFFIES timeout.
  */
@@ -3211,6 +3230,9 @@ static void kfree_rcu_monitor(struct work_struct *work)
 	unsigned long flags;
 	int i, j;

+	// Drain ready for reclaim.
+	kvfree_rcu_drain_ready(krcp);
+
 	raw_spin_lock_irqsave(&krcp->lock, flags);

 	// Attempt to start a new batch.
@@ -3230,8 +3252,9 @@ static void kfree_rcu_monitor(struct work_struct *work)
 			// Channel 2 corresponds to vmalloc-pointer bulk path.
 			for (j = 0; j < FREE_N_CHANNELS; j++) {
 				if (list_empty(&krwp->bulk_head_free[j])) {
-					list_replace_init(&krcp->bulk_head[j], &krwp->bulk_head_free[j]);
 					atomic_set(&krcp->bulk_count[j], 0);
+					list_replace_init(&krcp->bulk_head[j],
+						&krwp->bulk_head_free[j]);
 				}
 			}

@@ -3239,13 +3262,8 @@ static void kfree_rcu_monitor(struct work_struct *work)
 			// objects queued on the linked list.
 			if (!krwp->head_free) {
 				krwp->head_free = krcp->head;
-				WRITE_ONCE(krcp->head, NULL);
 				atomic_set(&krcp->head_count, 0);
-
-				// Take a snapshot for this krwp. Please note no more
-				// any objects can be added to attached head_free channel
-				// therefore fixate a GP for it here.
-				krwp->head_free_gp_snap = get_state_synchronize_rcu();
+				WRITE_ONCE(krcp->head, NULL);
 			}

 			// One work is per one batch, so there are three
@@ -3253,7 +3271,7 @@ static void kfree_rcu_monitor(struct work_struct *work)
 			// be that the work is in the pending state when
 			// channels have been detached following by each
 			// other.
-			queue_work(system_wq, &krwp->rcu_work);
+			queue_rcu_work(system_wq, &krwp->rcu_work);
 		}
 	}

@@ -3440,6 +3458,9 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
 		head->next = krcp->head;
 		WRITE_ONCE(krcp->head, head);
 		atomic_inc(&krcp->head_count);
+
+		// Take a snapshot for this krcp.
+		krcp->head_gp_snap = get_state_synchronize_rcu();
 		success = true;
 	}

@@ -4834,7 +4855,7 @@ static void __init kfree_rcu_batch_init(void)
 		struct kfree_rcu_cpu *krcp = per_cpu_ptr(&krc, cpu);

 		for (i = 0; i < KFREE_N_BATCHES; i++) {
-			INIT_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
+			INIT_RCU_WORK(&krcp->krw_arr[i].rcu_work, kfree_rcu_work);
 			krcp->krw_arr[i].krcp = krcp;

 			for (j = 0; j < FREE_N_CHANNELS; j++)

From patchwork Fri Feb 3 01:43:05 2023
Date: Thu, 2 Feb 2023 17:43:05 -0800
From: "Paul E. McKenney"
To: rcu@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@meta.com, rostedt@goodmis.org
Subject: [PATCH rcu 9/8] Add kvfree_rcu_mightsleep() and kfree_rcu_mightsleep()
Message-ID: <20230203014305.GA1075064@paulmck-ThinkPad-P17-Gen-1>
Reply-To: paulmck@kernel.org
In-Reply-To: <20230105002441.GA1768817@paulmck-ThinkPad-P17-Gen-1>

The kvfree_rcu() and kfree_rcu() APIs are hazardous in that if you forget
the second argument, it works, but might sleep. This sleeping can be a
correctness bug from atomic contexts, and even in non-atomic contexts it
might introduce unacceptable latencies.

This commit therefore adds kvfree_rcu_mightsleep() and kfree_rcu_mightsleep(),
which will replace the single-argument kvfree_rcu() and kfree_rcu(),
respectively. This commit enables a series of commits that switch from
single-argument kvfree_rcu() and kfree_rcu() to their _mightsleep()
counterparts. Once all of these commits land, the single-argument versions
will be removed.
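To make the hazard concrete, here is a hypothetical usage sketch (not taken
from the patch): the double-argument form uses the caller-provided rcu_head
and never sleeps, while the single-argument form may fall back to
synchronize_rcu() when memory is tight, which the new name spells out.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_node {
	int data;
	struct rcu_head rh;
};

static void demo_free(struct demo_node *a, struct demo_node *b)
{
	/* Two-argument form: uses a->rh, never sleeps, OK in atomic context. */
	kfree_rcu(a, rh);

	/* Single-argument form: no rcu_head is available, so under memory
	 * pressure it may block in synchronize_rcu().  After this patch the
	 * possibility is visible in the name itself. */
	kfree_rcu_mightsleep(b);
}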
Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Paul E. McKenney

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index f38d4469d7f30..84433600885a6 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -1004,6 +1004,9 @@ static inline notrace void rcu_read_unlock_sched_notrace(void)
 #define kvfree_rcu(...) KVFREE_GET_MACRO(__VA_ARGS__, \
 	kvfree_rcu_arg_2, kvfree_rcu_arg_1)(__VA_ARGS__)

+#define kvfree_rcu_mightsleep(ptr) kvfree_rcu_arg_1(ptr)
+#define kfree_rcu_mightsleep(ptr) kvfree_rcu_mightsleep(ptr)
+
 #define KVFREE_GET_MACRO(_1, _2, NAME, ...) NAME

 #define kvfree_rcu_arg_2(ptr, rhf) \
 do { \