From patchwork Wed Aug 2 13:09:17 2023
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 129842
From: thunder.leizhen@huaweicloud.com
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    "Paul E. McKenney", Frederic Weisbecker, Neeraj Upadhyay,
    Joel Fernandes, Josh Triplett, Boqun Feng, Steven Rostedt,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang, rcu@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Zhen Lei
Subject: [PATCH v4 1/2] mm: Remove kmem_valid_obj()
Date: Wed, 2 Aug 2023 21:09:17 +0800
Message-Id: <20230802130918.1132-2-thunder.leizhen@huaweicloud.com>
X-Mailer: git-send-email 2.37.3.windows.1
In-Reply-To: <20230802130918.1132-1-thunder.leizhen@huaweicloud.com>
References: <20230802130918.1132-1-thunder.leizhen@huaweicloud.com>

From: Zhen Lei

Function kmem_dump_obj() will splat if passed a pointer to a non-slab
object, so no one calls it directly: it is always necessary to call
kmem_valid_obj() first to determine whether the passed pointer points to a
valid slab object. Merging kmem_valid_obj() into kmem_dump_obj() therefore
makes the code more concise, so convert kmem_dump_obj() to work the same
way as vmalloc_dump_obj(). After this, no one calls kmem_valid_obj()
anymore, and it can be safely removed.

Suggested-by: Matthew Wilcox
Signed-off-by: Zhen Lei
Reviewed-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
---
 include/linux/slab.h |  5 +++--
 mm/slab_common.c     | 41 +++++++++++------------------------------
 mm/util.c            |  4 +---
 3 files changed, 15 insertions(+), 35 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 848c7c82ad5ad0b..d8ed2e810ec4448 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -244,8 +244,9 @@ DEFINE_FREE(kfree, void *, if (_T) kfree(_T))
 size_t ksize(const void *objp);
 
 #ifdef CONFIG_PRINTK
-bool kmem_valid_obj(void *object);
-void kmem_dump_obj(void *object);
+bool kmem_dump_obj(void *object);
+#else
+static inline bool kmem_dump_obj(void *object) { return false; }
 #endif
 
 /*
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d1555ea2981ac51..ee6ed6dd7ba9fa5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -528,26 +528,6 @@ bool slab_is_available(void)
 }
 
 #ifdef CONFIG_PRINTK
-/**
- * kmem_valid_obj - does the pointer reference a valid slab object?
- * @object: pointer to query.
- *
- * Return: %true if the pointer is to a not-yet-freed object from
- * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
- * is to an already-freed object, and %false otherwise.
- */
-bool kmem_valid_obj(void *object)
-{
-	struct folio *folio;
-
-	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
-	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
-		return false;
-	folio = virt_to_folio(object);
-	return folio_test_slab(folio);
-}
-EXPORT_SYMBOL_GPL(kmem_valid_obj);
-
 static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 {
 	if (__kfence_obj_info(kpp, object, slab))
@@ -566,11 +546,11 @@ static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *
  * and, if available, the slab name, return address, and stack trace from
  * the allocation and last free path of that object.
  *
- * This function will splat if passed a pointer to a non-slab object.
- * If you are not sure what type of object you have, you should instead
- * use mem_dump_obj().
+ * Return: %true if the pointer is to a not-yet-freed object from
+ * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
+ * is to an already-freed object, and %false otherwise.
  */
-void kmem_dump_obj(void *object)
+bool kmem_dump_obj(void *object)
 {
 	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
 	int i;
@@ -578,13 +558,13 @@ void kmem_dump_obj(void *object)
 	unsigned long ptroffset;
 	struct kmem_obj_info kp = { };
 
-	if (WARN_ON_ONCE(!virt_addr_valid(object)))
-		return;
+	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
+	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
+		return false;
 	slab = virt_to_slab(object);
-	if (WARN_ON_ONCE(!slab)) {
-		pr_cont(" non-slab memory.\n");
-		return;
-	}
+	if (!slab)
+		return false;
+
 	kmem_obj_info(&kp, object, slab);
 	if (kp.kp_slab_cache)
 		pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
@@ -621,6 +601,7 @@ void kmem_dump_obj(void *object)
 		pr_info("    %pS\n", kp.kp_free_stack[i]);
 	}
 
+	return true;
 }
 EXPORT_SYMBOL_GPL(kmem_dump_obj);
 #endif
diff --git a/mm/util.c b/mm/util.c
index dd12b9531ac4cad..ddfbb22dc1876d3 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1063,10 +1063,8 @@ void mem_dump_obj(void *object)
 {
 	const char *type;
 
-	if (kmem_valid_obj(object)) {
-		kmem_dump_obj(object);
+	if (kmem_dump_obj(object))
 		return;
-	}
 
 	if (vmalloc_dump_obj(object))
 		return;
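
A minimal usage sketch (illustrative only, not part of the patch): with
kmem_dump_obj() now reporting via its return value whether the pointer
referenced a slab object, a caller no longer needs a separate
kmem_valid_obj() probe before dumping. The helper name below is
hypothetical; kmem_dump_obj() and pr_info() are existing kernel APIs.

  #include <linux/slab.h>
  #include <linux/printk.h>

  /* Dump allocator details for p, or note that it is not slab memory. */
  static void dump_suspect_pointer(void *p)
  {
  	/* kmem_dump_obj() now returns false instead of splatting here. */
  	if (!kmem_dump_obj(p))
  		pr_info("pointer %px is not a slab object\n", p);
  }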

From patchwork Wed Aug 2 13:09:18 2023
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 129843
From: thunder.leizhen@huaweicloud.com
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Vlastimil Babka, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    "Paul E. McKenney", Frederic Weisbecker, Neeraj Upadhyay,
    Joel Fernandes, Josh Triplett, Boqun Feng, Steven Rostedt,
    Mathieu Desnoyers, Lai Jiangshan, Zqiang, rcu@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: Zhen Lei
Subject: [PATCH v4 2/2] rcu: Dump memory object info if callback function is invalid
Date: Wed, 2 Aug 2023 21:09:18 +0800
Message-Id: <20230802130918.1132-3-thunder.leizhen@huaweicloud.com>
X-Mailer: git-send-email 2.37.3.windows.1
In-Reply-To: <20230802130918.1132-1-thunder.leizhen@huaweicloud.com>
References: <20230802130918.1132-1-thunder.leizhen@huaweicloud.com>

From: Zhen Lei

When a structure containing an RCU callback rhp is (incorrectly) freed and
reallocated after rhp is passed to call_rcu(), it is not unusual for
rhp->func to be set to NULL. This defeats the debugging prints used by
__call_rcu_common() in kernels built with CONFIG_DEBUG_OBJECTS_RCU_HEAD=y,
which expect to identify the offending code using the identity of this
function. And in kernels built without CONFIG_DEBUG_OBJECTS_RCU_HEAD=y,
things are even worse, as can be seen from this splat:

Unable to handle kernel NULL pointer dereference at virtual address 0
... ...
PC is at 0x0
LR is at rcu_do_batch+0x1c0/0x3b8
... ...
 (rcu_do_batch) from (rcu_core+0x1d4/0x284)
 (rcu_core) from (__do_softirq+0x24c/0x344)
 (__do_softirq) from (__irq_exit_rcu+0x64/0x108)
 (__irq_exit_rcu) from (irq_exit+0x8/0x10)
 (irq_exit) from (__handle_domain_irq+0x74/0x9c)
 (__handle_domain_irq) from (gic_handle_irq+0x8c/0x98)
 (gic_handle_irq) from (__irq_svc+0x5c/0x94)
 (__irq_svc) from (arch_cpu_idle+0x20/0x3c)
 (arch_cpu_idle) from (default_idle_call+0x4c/0x78)
 (default_idle_call) from (do_idle+0xf8/0x150)
 (do_idle) from (cpu_startup_entry+0x18/0x20)
 (cpu_startup_entry) from (0xc01530)

This commit therefore adds calls to mem_dump_obj(rhp) to output some
information, for example:

  slab kmalloc-256 start ffff410c45019900 pointer offset 0 size 256

This provides the rough size of the memory block and the offset of the
rcu_head structure, which at least provides a few clues to help locate the
problem. If the problem is reproducible, additional slab debugging can be
enabled, for example, CONFIG_DEBUG_SLAB=y, which can provide significantly
more information.

Signed-off-by: Zhen Lei
Signed-off-by: Paul E. McKenney
---
 kernel/rcu/rcu.h      | 7 +++++++
 kernel/rcu/srcutiny.c | 1 +
 kernel/rcu/srcutree.c | 1 +
 kernel/rcu/tasks.h    | 1 +
 kernel/rcu/tiny.c     | 1 +
 kernel/rcu/tree.c     | 1 +
 6 files changed, 12 insertions(+)

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index d1dcb09750efbd6..bc81582238b9846 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -10,6 +10,7 @@
 #ifndef __LINUX_RCU_H
 #define __LINUX_RCU_H
 
+#include <linux/slab.h>
 #include <trace/events/rcu.h>
 
 /*
@@ -248,6 +249,12 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)
 }
 #endif	/* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
 
+static inline void debug_rcu_head_callback(struct rcu_head *rhp)
+{
+	if (unlikely(!rhp->func))
+		kmem_dump_obj(rhp);
+}
+
 extern int rcu_cpu_stall_suppress_at_boot;
 
 static inline bool rcu_stall_is_suppressed_at_boot(void)
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 336af24e0fe358a..c38e5933a5d6937 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -138,6 +138,7 @@ void srcu_drive_gp(struct work_struct *wp)
 	while (lh) {
 		rhp = lh;
 		lh = lh->next;
+		debug_rcu_head_callback(rhp);
 		local_bh_disable();
 		rhp->func(rhp);
 		local_bh_enable();
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index f1a905200fc2f79..833a8f848a90ae6 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1710,6 +1710,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	rhp = rcu_cblist_dequeue(&ready_cbs);
 	for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) {
 		debug_rcu_head_unqueue(rhp);
+		debug_rcu_head_callback(rhp);
 		local_bh_disable();
 		rhp->func(rhp);
 		local_bh_enable();
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 7294be62727b12c..148ac6a464bfb12 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -538,6 +538,7 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
 	raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
 	len = rcl.len;
 	for (rhp = rcu_cblist_dequeue(&rcl); rhp; rhp = rcu_cblist_dequeue(&rcl)) {
+		debug_rcu_head_callback(rhp);
 		local_bh_disable();
 		rhp->func(rhp);
 		local_bh_enable();
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index 42f7589e51e09e7..fec804b7908032d 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -97,6 +97,7 @@ static inline bool rcu_reclaim_tiny(struct rcu_head *head)
 
 	trace_rcu_invoke_callback("", head);
 	f = head->func;
+	debug_rcu_head_callback(head);
 	WRITE_ONCE(head->func, (rcu_callback_t)0L);
 	f(head);
 	rcu_lock_release(&rcu_callback_map);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 7c79480bfaa04e4..927c5ba0ae42269 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2135,6 +2135,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 		trace_rcu_invoke_callback(rcu_state.name, rhp);
 		f = rhp->func;
+		debug_rcu_head_callback(rhp);
 		WRITE_ONCE(rhp->func, (rcu_callback_t)0L);
 		f(rhp);
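
An illustrative sketch (not part of the patch) of the buggy pattern this
diagnostic targets, assuming a hypothetical "struct foo" and foo_release()
callback; call_rcu(), kfree(), and container_of() are existing kernel APIs:

  #include <linux/slab.h>
  #include <linux/rcupdate.h>

  struct foo {
  	struct rcu_head rhp;
  	int data;
  };

  static void foo_release(struct rcu_head *rhp)
  {
  	kfree(container_of(rhp, struct foo, rhp));
  }

  static void buggy_free(struct foo *fp)
  {
  	call_rcu(&fp->rhp, foo_release);
  	kfree(fp);	/* BUG: freed before the grace period ends */
  	/*
  	 * If the block is reallocated and zeroed before the callback runs,
  	 * fp->rhp.func is NULL by the time rcu_do_batch() invokes it.
  	 * debug_rcu_head_callback() then calls kmem_dump_obj(&fp->rhp),
  	 * printing something like:
  	 *   slab kmalloc-256 start ... pointer offset 0 size 256
  	 */
  }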