From patchwork Thu Dec 7 02:37:53 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174883
Message-ID: <20231207023818.766723807@goodmis.org>
User-Agent: quilt/0.67
Date: Wed, 06 Dec 2023 21:37:53 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, stable@vger.kernel.org
Subject: [for-linus][PATCH 1/8] tracing: Always update snapshot buffer size
References: <20231207023752.712829638@goodmis.org>
From: "Steven Rostedt (Google)"

It used to be that only the top level instance had a snapshot buffer (for
latency tracers like wakeup and irqsoff). The update of the ring buffer
size would check if the instance was the top level and, if so, would also
update the snapshot buffer, as it needs to be the same size as the main
buffer.

Now that lower level instances also have a snapshot buffer, they too need
to update their snapshot buffer sizes when the main buffer is changed,
otherwise the following can be triggered:

 # cd /sys/kernel/tracing
 # echo 1500 > buffer_size_kb
 # mkdir instances/foo
 # echo irqsoff > instances/foo/current_tracer
 # echo 1000 > instances/foo/buffer_size_kb

Produces:

 WARNING: CPU: 2 PID: 856 at kernel/trace/trace.c:1938 update_max_tr_single.part.0+0x27d/0x320

Which is:

	ret = ring_buffer_swap_cpu(tr->max_buffer.buffer,
				   tr->array_buffer.buffer, cpu);
	if (ret == -EBUSY) {
		[..]
	}

	WARN_ON_ONCE(ret && ret != -EAGAIN && ret != -EBUSY);  <== here

That's because ring_buffer_swap_cpu() has:

	int ret = -EINVAL;

	[..]

	/* At least make sure the two buffers are somewhat the same */
	if (cpu_buffer_a->nr_pages != cpu_buffer_b->nr_pages)
		goto out;

	[..]
 out:
	return ret;
 }

Instead, update all instances' snapshot buffer sizes when their main
buffer size is updated.
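The failing check quoted above can be modeled outside the kernel. The sketch below is illustrative only — mock_cpu_buffer, mock_swap_cpu and mock_resize are hypothetical stand-ins, not the kernel's types — but it mirrors the nr_pages guard in ring_buffer_swap_cpu(): once the main buffer is resized and an instance's snapshot buffer is not, every subsequent swap fails with -EINVAL.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for ring_buffer_per_cpu; illustrative only. */
struct mock_cpu_buffer {
	long nr_pages;	/* number of pages backing this per-CPU buffer */
};

/* Mirrors the guard in ring_buffer_swap_cpu() quoted in the changelog:
 * refuse to swap two per-CPU buffers of different sizes. */
int mock_swap_cpu(struct mock_cpu_buffer *a, struct mock_cpu_buffer *b)
{
	if (a->nr_pages != b->nr_pages)
		return -EINVAL;
	/* ... a real implementation would exchange the buffer pointers ... */
	return 0;
}

/* Resizing only the main buffer recreates the bug; resizing both models
 * the fix (update every instance's snapshot buffer with the main one). */
void mock_resize(struct mock_cpu_buffer *main_buf,
		 struct mock_cpu_buffer *snap_buf,
		 long pages, int resize_snapshot_too)
{
	main_buf->nr_pages = pages;
	if (resize_snapshot_too)
		snap_buf->nr_pages = pages;
}
```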
Link: https://lkml.kernel.org/r/20231205220010.454662151@goodmis.org

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu Desnoyers
Cc: Andrew Morton
Fixes: 6d9b3fa5e7f6 ("tracing: Move tracing_max_latency into trace_array")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 9aebf904ff97..231c173ec04f 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -6392,8 +6392,7 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
 		return ret;
 
 #ifdef CONFIG_TRACER_MAX_TRACE
-	if (!(tr->flags & TRACE_ARRAY_FL_GLOBAL) ||
-	    !tr->current_trace->use_max_tr)
+	if (!tr->current_trace->use_max_tr)
 		goto out;
 
 	ret = ring_buffer_resize(tr->max_buffer.buffer, size, cpu);

From patchwork Thu Dec 7 02:37:54 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174888
Message-ID: <20231207023819.059291958@goodmis.org>
User-Agent: quilt/0.67
Date: Wed, 06 Dec 2023 21:37:54 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, stable@vger.kernel.org
Subject: [for-linus][PATCH 2/8] tracing: Stop current tracer when resizing buffer
References: <20231207023752.712829638@goodmis.org>

From: "Steven Rostedt (Google)"

When the ring buffer is being resized, it can cause side effects to the
running tracer. For instance, there is a race with the irqsoff tracer,
which swaps individual per-CPU buffers between the main buffer and the
snapshot buffer. The resize operation modifies the main buffer and then
the snapshot buffer. If a swap happens in between those two operations,
it will break the tracer.

Simply stop the running tracer before resizing the buffers and enable it
again when finished.
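The stop/resize/start discipline this patch adopts can be sketched as a userspace model. The mock_* names below are illustrative stand-ins (not kernel code): a nesting stop_count disables recording while nonzero, and the resize path is bracketed by stop/start so no swap can race with it.

```c
#include <assert.h>

/* Illustrative model of the stop_count pattern the patch relies on. */
struct mock_trace_array {
	int stop_count;
};

void mock_stop(struct mock_trace_array *tr)
{
	tr->stop_count++;		/* first stop disables recording */
}

void mock_start(struct mock_trace_array *tr)
{
	if (--tr->stop_count < 0)	/* someone screwed up their debugging */
		tr->stop_count = 0;
}

int mock_recording_enabled(const struct mock_trace_array *tr)
{
	return tr->stop_count == 0;	/* recording only when fully started */
}

/* The shape of the fix: the tracer is stopped for the whole resize, so
 * a snapshot swap cannot observe a half-resized pair of buffers. */
void mock_resize_buffer(struct mock_trace_array *tr, long size)
{
	mock_stop(tr);
	(void)size;			/* resize main and snapshot buffers here */
	mock_start(tr);
}
```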
Link: https://lkml.kernel.org/r/20231205220010.748996423@goodmis.org

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu Desnoyers
Cc: Andrew Morton
Fixes: 3928a8a2d9808 ("ftrace: make work with new ring buffer")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 231c173ec04f..e978868b1a22 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -6387,9 +6387,12 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
 	if (!tr->array_buffer.buffer)
 		return 0;
 
+	/* Do not allow tracing while resizing ring buffer */
+	tracing_stop_tr(tr);
+
 	ret = ring_buffer_resize(tr->array_buffer.buffer, size, cpu);
 	if (ret < 0)
-		return ret;
+		goto out_start;
 
 #ifdef CONFIG_TRACER_MAX_TRACE
 	if (!tr->current_trace->use_max_tr)
@@ -6417,7 +6420,7 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
 			WARN_ON(1);
 			tracing_disabled = 1;
 		}
-		return ret;
+		goto out_start;
 	}
 
 	update_buffer_entries(&tr->max_buffer, cpu);
@@ -6426,7 +6429,8 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
 #endif /* CONFIG_TRACER_MAX_TRACE */
 
 	update_buffer_entries(&tr->array_buffer, cpu);
-
+ out_start:
+	tracing_start_tr(tr);
 	return ret;
 }

From patchwork Thu Dec 7 02:37:55 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174886
Message-ID: <20231207023819.347422078@goodmis.org>
User-Agent: quilt/0.67
Date: Wed, 06 Dec 2023 21:37:55 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, stable@vger.kernel.org
Subject: [for-linus][PATCH 3/8] tracing: Disable snapshot buffer when stopping instance tracers
References: <20231207023752.712829638@goodmis.org>
From: "Steven Rostedt (Google)"

It used to be that only the top level instance had a snapshot buffer (for
latency tracers like wakeup and irqsoff). Stopping a tracer in an
instance would not disable the snapshot buffer. This could have some
unintended consequences if the irqsoff tracer is enabled.

Consolidate tracing_start/stop() with tracing_start/stop_tr() so that all
instances behave the same. The tracing_start/stop() functions will just
call their respective tracing_start/stop_tr() with the global_trace array
passed in.

Link: https://lkml.kernel.org/r/20231205220011.041220035@goodmis.org

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu Desnoyers
Cc: Andrew Morton
Fixes: 6d9b3fa5e7f6 ("tracing: Move tracing_max_latency into trace_array")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 110 +++++++++++++------------------------------
 1 file changed, 34 insertions(+), 76 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index e978868b1a22..2492c6c76850 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2360,13 +2360,7 @@ int is_tracing_stopped(void)
 	return global_trace.stop_count;
 }
 
-/**
- * tracing_start - quick start of the tracer
- *
- * If tracing is enabled but was stopped by tracing_stop,
- * this will start the tracer back up.
- */
-void tracing_start(void)
+static void tracing_start_tr(struct trace_array *tr)
 {
 	struct trace_buffer *buffer;
 	unsigned long flags;
@@ -2374,119 +2368,83 @@ void tracing_start(void)
 	if (tracing_disabled)
 		return;
 
-	raw_spin_lock_irqsave(&global_trace.start_lock, flags);
-	if (--global_trace.stop_count) {
-		if (global_trace.stop_count < 0) {
+	raw_spin_lock_irqsave(&tr->start_lock, flags);
+	if (--tr->stop_count) {
+		if (WARN_ON_ONCE(tr->stop_count < 0)) {
 			/* Someone screwed up their debugging */
-			WARN_ON_ONCE(1);
-			global_trace.stop_count = 0;
+			tr->stop_count = 0;
 		}
 		goto out;
 	}
 
 	/* Prevent the buffers from switching */
-	arch_spin_lock(&global_trace.max_lock);
+	arch_spin_lock(&tr->max_lock);
 
-	buffer = global_trace.array_buffer.buffer;
+	buffer = tr->array_buffer.buffer;
 	if (buffer)
 		ring_buffer_record_enable(buffer);
 
 #ifdef CONFIG_TRACER_MAX_TRACE
-	buffer = global_trace.max_buffer.buffer;
+	buffer = tr->max_buffer.buffer;
 	if (buffer)
 		ring_buffer_record_enable(buffer);
 #endif
 
-	arch_spin_unlock(&global_trace.max_lock);
-
- out:
-	raw_spin_unlock_irqrestore(&global_trace.start_lock, flags);
-}
-
-static void tracing_start_tr(struct trace_array *tr)
-{
-	struct trace_buffer *buffer;
-	unsigned long flags;
-
-	if (tracing_disabled)
-		return;
-
-	/* If global, we need to also start the max tracer */
-	if (tr->flags & TRACE_ARRAY_FL_GLOBAL)
-		return tracing_start();
-
-	raw_spin_lock_irqsave(&tr->start_lock, flags);
-
-	if (--tr->stop_count) {
-		if (tr->stop_count < 0) {
-			/* Someone screwed up their debugging */
-			WARN_ON_ONCE(1);
-			tr->stop_count = 0;
-		}
-		goto out;
-	}
-
-	buffer = tr->array_buffer.buffer;
-	if (buffer)
-		ring_buffer_record_enable(buffer);
+	arch_spin_unlock(&tr->max_lock);
 
  out:
 	raw_spin_unlock_irqrestore(&tr->start_lock, flags);
 }
 
 /**
- * tracing_stop - quick stop of the tracer
+ * tracing_start - quick start of the tracer
  *
- * Light weight way to stop tracing. Use in conjunction with
- * tracing_start.
+ * If tracing is enabled but was stopped by tracing_stop,
+ * this will start the tracer back up.
  */
-void tracing_stop(void)
+void tracing_start(void)
+
+{
+	return tracing_start_tr(&global_trace);
+}
+
+static void tracing_stop_tr(struct trace_array *tr)
 {
 	struct trace_buffer *buffer;
 	unsigned long flags;
 
-	raw_spin_lock_irqsave(&global_trace.start_lock, flags);
-	if (global_trace.stop_count++)
+	raw_spin_lock_irqsave(&tr->start_lock, flags);
+	if (tr->stop_count++)
 		goto out;
 
 	/* Prevent the buffers from switching */
-	arch_spin_lock(&global_trace.max_lock);
+	arch_spin_lock(&tr->max_lock);
 
-	buffer = global_trace.array_buffer.buffer;
+	buffer = tr->array_buffer.buffer;
 	if (buffer)
 		ring_buffer_record_disable(buffer);
 
 #ifdef CONFIG_TRACER_MAX_TRACE
-	buffer = global_trace.max_buffer.buffer;
+	buffer = tr->max_buffer.buffer;
 	if (buffer)
 		ring_buffer_record_disable(buffer);
 #endif
 
-	arch_spin_unlock(&global_trace.max_lock);
+	arch_spin_unlock(&tr->max_lock);
 
  out:
-	raw_spin_unlock_irqrestore(&global_trace.start_lock, flags);
+	raw_spin_unlock_irqrestore(&tr->start_lock, flags);
 }
 
-static void tracing_stop_tr(struct trace_array *tr)
+/**
+ * tracing_stop - quick stop of the tracer
+ *
+ * Light weight way to stop tracing. Use in conjunction with
+ * tracing_start.
+ */
+void tracing_stop(void)
 {
-	struct trace_buffer *buffer;
-	unsigned long flags;
-
-	/* If global, we need to also stop the max tracer */
-	if (tr->flags & TRACE_ARRAY_FL_GLOBAL)
-		return tracing_stop();
-
-	raw_spin_lock_irqsave(&tr->start_lock, flags);
-	if (tr->stop_count++)
-		goto out;
-
-	buffer = tr->array_buffer.buffer;
-	if (buffer)
-		ring_buffer_record_disable(buffer);
-
- out:
-	raw_spin_unlock_irqrestore(&tr->start_lock, flags);
+	return tracing_stop_tr(&global_trace);
 }
 
 static int trace_save_cmdline(struct task_struct *tsk)

From patchwork Thu Dec 7 02:37:56 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174887
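The consolidation in patch 3 above follows a common refactoring shape: one parameterized helper handles every instance (including its snapshot buffer), and the old global entry point becomes a thin wrapper over it. The sketch below is an illustrative userspace model of that shape; the mock_* names are stand-ins, not the kernel's symbols.

```c
#include <assert.h>

/* Illustrative model: every instance, not just the global one, has a
 * snapshot buffer that must be disabled when the instance is stopped. */
struct mock_instance {
	int stop_count;
	int main_enabled;
	int snapshot_enabled;	/* previously only handled for the global instance */
};

struct mock_instance mock_global;

/* The single parameterized helper: same code path for every instance. */
void mock_stop_tr(struct mock_instance *tr)
{
	if (tr->stop_count++)
		return;			/* already stopped; just nest */
	tr->main_enabled = 0;
	tr->snapshot_enabled = 0;	/* now disabled for *every* instance */
}

/* The old global entry point becomes a one-line wrapper. */
void mock_stop(void)
{
	mock_stop_tr(&mock_global);
}
```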
Message-ID: <20231207023819.638901361@goodmis.org>
User-Agent: quilt/0.67
Date: Wed, 06 Dec 2023 21:37:56 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, stable@vger.kernel.org, Petr Pavlu
Subject: [for-linus][PATCH 4/8] tracing: Fix incomplete locking when disabling buffered events
References: <20231207023752.712829638@goodmis.org>

From: Petr Pavlu

The following warning appears when using buffered events:

[  203.556451] WARNING: CPU: 53 PID: 10220 at kernel/trace/ring_buffer.c:3912 ring_buffer_discard_commit+0x2eb/0x420
[...]
[  203.670690] CPU: 53 PID: 10220 Comm: stress-ng-sysin Tainted: G            E      6.7.0-rc2-default #4 56e6d0fcf5581e6e51eaaecbdaec2a2338c80f3a
[  203.670704] Hardware name: Intel Corp. GROVEPORT/GROVEPORT, BIOS GVPRCRB1.86B.0016.D04.1705030402 05/03/2017
[  203.670709] RIP: 0010:ring_buffer_discard_commit+0x2eb/0x420
[  203.735721] Code: 4c 8b 4a 50 48 8b 42 48 49 39 c1 0f 84 b3 00 00 00 49 83 e8 01 75 b1 48 8b 42 10 f0 ff 40 08 0f 0b e9 fc fe ff ff f0 ff 47 08 <0f> 0b e9 77 fd ff ff 48 8b 42 10 f0 ff 40 08 0f 0b e9 f5 fe ff ff
[  203.735734] RSP: 0018:ffffb4ae4f7b7d80 EFLAGS: 00010202
[  203.735745] RAX: 0000000000000000 RBX: ffffb4ae4f7b7de0 RCX: ffff8ac10662c000
[  203.735754] RDX: ffff8ac0c750be00 RSI: ffff8ac10662c000 RDI: ffff8ac0c004d400
[  203.781832] RBP: ffff8ac0c039cea0 R08: 0000000000000000 R09: 0000000000000000
[  203.781839] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[  203.781842] R13: ffff8ac10662c000 R14: ffff8ac0c004d400 R15: ffff8ac10662c008
[  203.781846] FS:  00007f4cd8a67740(0000) GS:ffff8ad798880000(0000) knlGS:0000000000000000
[  203.781851] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  203.781855] CR2: 0000559766a74028 CR3: 00000001804c4000 CR4: 00000000001506f0
[  203.781862] Call Trace:
[  203.781870]
[  203.851949]  trace_event_buffer_commit+0x1ea/0x250
[  203.851967]  trace_event_raw_event_sys_enter+0x83/0xe0
[  203.851983]  syscall_trace_enter.isra.0+0x182/0x1a0
[  203.851990]  do_syscall_64+0x3a/0xe0
[  203.852075]  entry_SYSCALL_64_after_hwframe+0x6e/0x76
[  203.852090] RIP: 0033:0x7f4cd870fa77
[  203.982920] Code: 00 b8 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 66 90 b8 89 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e9 43 0e 00 f7 d8 64 89 01 48
[  203.982932] RSP: 002b:00007fff99717dd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000089
[  203.982942] RAX: ffffffffffffffda RBX: 0000558ea1d7b6f0 RCX: 00007f4cd870fa77
[  203.982948] RDX: 0000000000000000 RSI: 00007fff99717de0 RDI: 0000558ea1d7b6f0
[  203.982957] RBP: 00007fff99717de0 R08: 00007fff997180e0 R09: 00007fff997180e0
[  203.982962] R10: 00007fff997180e0 R11: 0000000000000246 R12: 00007fff99717f40
[  204.049239] R13: 00007fff99718590 R14: 0000558e9f2127a8 R15: 00007fff997180b0
[  204.049256]

For instance, it can be triggered by running these two commands in
parallel:

 $ while true; do
     echo hist:key=id.syscall:val=hitcount > \
       /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger;
   done
 $ stress-ng --sysinfo $(nproc)

The warning indicates that the current ring_buffer_per_cpu is not in the
committing state. It happens because the active ring_buffer_event doesn't
actually come from the ring_buffer_per_cpu but is allocated from
trace_buffered_event.

The bug is in function trace_buffered_event_disable() where the following
normally happens:

* The code invokes disable_trace_buffered_event() via
  smp_call_function_many() and follows it by synchronize_rcu(). This
  increments the per-CPU variable trace_buffered_event_cnt on each target
  CPU and grants trace_buffered_event_disable() the exclusive access to
  the per-CPU variable trace_buffered_event.

* Maintenance is performed on trace_buffered_event, all per-CPU event
  buffers get freed.

* The code invokes enable_trace_buffered_event() via
  smp_call_function_many(). This decrements trace_buffered_event_cnt and
  releases the access to trace_buffered_event.

A problem is that smp_call_function_many() runs a given function on all
target CPUs except on the current one. The following can then occur:

* Task X executing trace_buffered_event_disable() runs on CPU 0.

* The control reaches synchronize_rcu() and the task gets rescheduled on
  another CPU 1.

* The RCU synchronization finishes. At this point,
  trace_buffered_event_disable() has the exclusive access to all
  trace_buffered_event variables except trace_buffered_event[CPU0]
  because trace_buffered_event_cnt[CPU0] is never incremented and if the
  buffer is currently unused, remains set to 0.

* A different task Y is scheduled on CPU 0 and hits a trace event.
  The code in trace_event_buffer_lock_reserve() sees that
  trace_buffered_event_cnt[CPU0] is set to 0 and decides to use the
  buffer provided by trace_buffered_event[CPU0].

* Task X continues its execution in trace_buffered_event_disable(). The
  code incorrectly frees the event buffer pointed to by
  trace_buffered_event[CPU0] and resets the variable to NULL.

* Task Y writes event data to the now freed buffer and later detects the
  created inconsistency.

The issue is observable since commit dea499781a11 ("tracing: Fix warning
in trace_buffered_event_disable()") which moved the call of
trace_buffered_event_disable() in __ftrace_event_enable_disable()
earlier, prior to invoking call->class->reg(.. TRACE_REG_UNREGISTER ..).
The underlying problem in trace_buffered_event_disable() is however
present since the original implementation in commit 0fc1b09ff1ff
("tracing: Use temp buffer when filtering events").

Fix the problem by replacing the two smp_call_function_many() calls with
on_each_cpu_mask() which invokes a given callback on all CPUs.

Link: https://lore.kernel.org/all/20231127151248.7232-2-petr.pavlu@suse.com/
Link: https://lkml.kernel.org/r/20231205161736.19663-2-petr.pavlu@suse.com

Cc: stable@vger.kernel.org
Fixes: 0fc1b09ff1ff ("tracing: Use temp buffer when filtering events")
Fixes: dea499781a11 ("tracing: Fix warning in trace_buffered_event_disable()")
Signed-off-by: Petr Pavlu
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2492c6c76850..6aeffa4a6994 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2781,11 +2781,9 @@ void trace_buffered_event_disable(void)
 	if (--trace_buffered_event_ref)
 		return;
 
-	preempt_disable();
 	/* For each CPU, set the buffer as used. */
-	smp_call_function_many(tracing_buffer_mask,
-			       disable_trace_buffered_event, NULL, 1);
-	preempt_enable();
+	on_each_cpu_mask(tracing_buffer_mask, disable_trace_buffered_event,
+			 NULL, true);
 
 	/* Wait for all current users to finish */
 	synchronize_rcu();
@@ -2800,11 +2798,9 @@ void trace_buffered_event_disable(void)
 	 */
 	smp_wmb();
 
-	preempt_disable();
 	/* Do the work on each cpu */
-	smp_call_function_many(tracing_buffer_mask,
-			       enable_trace_buffered_event, NULL, 1);
-	preempt_enable();
+	on_each_cpu_mask(tracing_buffer_mask, enable_trace_buffered_event, NULL,
+			 true);
 }
 
 static struct trace_buffer *temp_buffer;

From patchwork Thu Dec 7 02:37:57 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174889
Message-ID: <20231207023819.931186714@goodmis.org>
Date: Wed, 06 Dec 2023 21:37:57 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, Petr Pavlu
Subject: [for-linus][PATCH 5/8] tracing: Fix a warning when allocating buffered events fails
References: <20231207023752.712829638@goodmis.org>

From: Petr Pavlu

Function trace_buffered_event_disable() produces an unexpected warning
when the previous call to trace_buffered_event_enable() fails to
allocate pages for buffered events.

The situation can occur as follows:

* The counter trace_buffered_event_ref is at 0.

* The soft mode gets enabled for some event and
  trace_buffered_event_enable() is called. The function increments
  trace_buffered_event_ref to 1 and starts allocating event pages.

* The allocation fails for some page and trace_buffered_event_disable()
  is called for cleanup.

* Function trace_buffered_event_disable() decrements
  trace_buffered_event_ref back to 0, recognizes that it was the last
  use of buffered events and frees all allocated pages.
* The control goes back to trace_buffered_event_enable() which returns.
  The caller of trace_buffered_event_enable() has no information that
  the function actually failed.

* Some time later, the soft mode is disabled for the same event.
  Function trace_buffered_event_disable() is called. It warns on
  "WARN_ON_ONCE(!trace_buffered_event_ref)" and returns.

Buffered events are just an optimization and the code can handle
failures. Make trace_buffered_event_enable() exit on the first failure
and leave any cleanup to when trace_buffered_event_disable() is called.

Link: https://lore.kernel.org/all/20231127151248.7232-2-petr.pavlu@suse.com/
Link: https://lkml.kernel.org/r/20231205161736.19663-3-petr.pavlu@suse.com

Fixes: 0fc1b09ff1ff ("tracing: Use temp buffer when filtering events")
Signed-off-by: Petr Pavlu
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 6aeffa4a6994..ef72354f61ce 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2728,8 +2728,11 @@ void trace_buffered_event_enable(void)
 	for_each_tracing_cpu(cpu) {
 		page = alloc_pages_node(cpu_to_node(cpu),
 					GFP_KERNEL | __GFP_NORETRY, 0);
-		if (!page)
-			goto failed;
+		/* This is just an optimization and can handle failures */
+		if (!page) {
+			pr_err("Failed to allocate event buffer\n");
+			break;
+		}
 
 		event = page_address(page);
 		memset(event, 0, sizeof(*event));
@@ -2743,10 +2746,6 @@ void trace_buffered_event_enable(void)
 			WARN_ON_ONCE(1);
 		preempt_enable();
 	}
-
-	return;
- failed:
-	trace_buffered_event_disable();
 }
 
 static void enable_trace_buffered_event(void *data)

From patchwork Thu Dec 7 02:37:58 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174884
Message-ID: <20231207023820.217771011@goodmis.org>
Date: Wed, 06 Dec 2023 21:37:58 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, stable@vger.kernel.org, Petr Pavlu
Subject: [for-linus][PATCH 6/8] tracing: Fix a possible race when disabling buffered events
References:
<20231207023752.712829638@goodmis.org>

From: Petr Pavlu

Function trace_buffered_event_disable() is responsible for freeing pages
backing buffered events and this process can run concurrently with
trace_event_buffer_lock_reserve(). The following race is currently
possible:

* Function trace_buffered_event_disable() is called on CPU 0. It
  increments trace_buffered_event_cnt on each CPU and waits via
  synchronize_rcu() for each user of trace_buffered_event to complete.

* After synchronize_rcu() is finished, function
  trace_buffered_event_disable() has exclusive access to
  trace_buffered_event. All counters trace_buffered_event_cnt are at 1
  and all pointers trace_buffered_event are still valid.

* At this point, on a different CPU 1, the execution reaches
  trace_event_buffer_lock_reserve(). The function calls
  preempt_disable_notrace() and only now enters an RCU read-side
  critical section. The function proceeds and reads a still valid
  pointer from trace_buffered_event[CPU1] into the local variable
  "entry". However, it doesn't yet read trace_buffered_event_cnt[CPU1],
  which happens later.

* Function trace_buffered_event_disable() continues. It frees
  trace_buffered_event[CPU1] and decrements
  trace_buffered_event_cnt[CPU1] back to 0.

* Function trace_event_buffer_lock_reserve() continues. It reads and
  increments trace_buffered_event_cnt[CPU1] from 0 to 1.
  This makes it believe that it can use the "entry" that it already
  obtained, but the pointer is now invalid and any access results in a
  use-after-free.

Fix the problem by making a second synchronize_rcu() call after all
trace_buffered_event values are set to NULL. This waits on all potential
users in trace_event_buffer_lock_reserve() that still read a previous
pointer from trace_buffered_event.

Link: https://lore.kernel.org/all/20231127151248.7232-2-petr.pavlu@suse.com/
Link: https://lkml.kernel.org/r/20231205161736.19663-4-petr.pavlu@suse.com

Cc: stable@vger.kernel.org
Fixes: 0fc1b09ff1ff ("tracing: Use temp buffer when filtering events")
Signed-off-by: Petr Pavlu
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/trace.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ef72354f61ce..fbcd3bafb93e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2791,13 +2791,17 @@ void trace_buffered_event_disable(void)
 		free_page((unsigned long)per_cpu(trace_buffered_event, cpu));
 		per_cpu(trace_buffered_event, cpu) = NULL;
 	}
+
 	/*
-	 * Make sure trace_buffered_event is NULL before clearing
-	 * trace_buffered_event_cnt.
+	 * Wait for all CPUs that potentially started checking if they can use
+	 * their event buffer only after the previous synchronize_rcu() call and
+	 * they still read a valid pointer from trace_buffered_event. It must be
+	 * ensured they don't see cleared trace_buffered_event_cnt else they
+	 * could wrongly decide to use the pointed-to buffer which is now freed.
 	 */
-	smp_wmb();
+	synchronize_rcu();
 
-	/* Do the work on each cpu */
+	/* For each CPU, relinquish the buffer */
 	on_each_cpu_mask(tracing_buffer_mask, enable_trace_buffered_event, NULL,
 			 true);
 }

From patchwork Thu Dec 7 02:37:59 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174890
Message-ID: <20231207023820.509376496@goodmis.org>
Date: Wed, 06 Dec 2023 21:37:59 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, stable@vger.kernel.org
Subject: [for-linus][PATCH 7/8] ring-buffer: Force absolute timestamp on discard of event
References: <20231207023752.712829638@goodmis.org>

From: "Steven Rostedt (Google)"

There's a race where, if an event is discarded from the ring buffer and
an interrupt happens at that moment and inserts an event, the time stamp
of the discarded event is still used as an offset. This can screw up the
timings.

If the event is going to be discarded, set the "before_stamp" to zero.
When a new event comes in, it compares the "before_stamp" with the
"write_stamp" and, if they are not equal, it will insert an absolute
timestamp. This prevents the timings from getting out of sync due to the
discarded event.
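The decision described above can be sketched in ordinary userspace C. This is a simplified model with hypothetical names, not the kernel's implementation (the real code uses atomic rb_time_t operations in kernel/trace/ring_buffer.c): discarding zeroes a model before_stamp, and the next writer falls back to an absolute timestamp whenever before_stamp and write_stamp disagree.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified, hypothetical model of the per-CPU buffer stamps. */
struct model_cpu_buffer {
	uint64_t before_stamp;
	uint64_t write_stamp;
};

/* A writer records its timestamp in before_stamp, then commits write_stamp. */
static void model_reserve(struct model_cpu_buffer *b, uint64_t ts)
{
	b->before_stamp = ts;
	b->write_stamp = ts;
}

/*
 * Discarding an event zeroes before_stamp so the next writer cannot
 * trust write_stamp as a delta base (this is what the patch adds).
 */
static void model_discard(struct model_cpu_buffer *b)
{
	b->before_stamp = 0;
}

/*
 * The next writer may use a cheap delta only when before_stamp and
 * write_stamp agree; otherwise it must emit an absolute timestamp.
 */
static bool model_needs_absolute_ts(const struct model_cpu_buffer *b)
{
	return b->before_stamp != b->write_stamp;
}
```

After a discard the two stamps can never match, so the next event is forced onto an absolute timestamp regardless of what the interrupting writer left in write_stamp.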
Link: https://lore.kernel.org/linux-trace-kernel/20231206100244.5130f9b3@gandalf.local.home

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu Desnoyers
Fixes: 6f6be606e763f ("ring-buffer: Force before_stamp and write_stamp to be different on discard")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 43cc47d7faaf..a6da2d765c78 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3030,22 +3030,19 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		local_read(&bpage->write) & ~RB_WRITE_MASK;
 	unsigned long event_length = rb_event_length(event);
 
+	/*
+	 * For the before_stamp to be different than the write_stamp
+	 * to make sure that the next event adds an absolute
+	 * value and does not rely on the saved write stamp, which
+	 * is now going to be bogus.
+	 */
+	rb_time_set(&cpu_buffer->before_stamp, 0);
+
 	/* Something came in, can't discard */
 	if (!rb_time_cmpxchg(&cpu_buffer->write_stamp,
 			     write_stamp, write_stamp - delta))
 		return false;
 
-	/*
-	 * It's possible that the event time delta is zero
-	 * (has the same time stamp as the previous event)
-	 * in which case write_stamp and before_stamp could
-	 * be the same. In such a case, force before_stamp
-	 * to be different than write_stamp. It doesn't
-	 * matter what it is, as long as its different.
-	 */
-	if (!delta)
-		rb_time_set(&cpu_buffer->before_stamp, 0);
-
 	/*
 	 * If an event were to come in now, it would see that the
 	 * write_stamp and the before_stamp are different, and assume

From patchwork Thu Dec 7 02:38:00 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 174885
Message-ID: <20231207023820.797727180@goodmis.org>
Date: Wed, 06 Dec 2023 21:38:00 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton, stable@vger.kernel.org
Subject:
[for-linus][PATCH 8/8] ring-buffer: Test last update in 32bit version of __rb_time_read()
References: <20231207023752.712829638@goodmis.org>

From: "Steven Rostedt (Google)"

Since a 64-bit cmpxchg() is very expensive on 32-bit architectures, the
timestamp used by the ring buffer does some interesting tricks to still
have an atomic 64-bit number. It originally just used 60 bits and broke
them up into two 32-bit words, where the extra 2 bits were used for
synchronization. But this was not enough for all use cases, and all 64
bits were required.

The 32-bit version of the ring buffer timestamp was then broken up into
three 32-bit words using the same counter trick. But one update was not
done: the check to see if the read operation completed without
interruption only examined the first two words and not the last one (as
it had before this update). Fix it by making sure all three updates
happen without interruption, by comparing the initial counter with the
counter of the last updated word.
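The word-splitting trick can be illustrated with plain userspace C. This is a schematic model, not the kernel's rb_time_t layout (the payload/counter split here is hypothetical): a 64-bit value is spread across three 32-bit words, each tagged with the same update counter, and a read is valid only if all three counters match — including the third word, which is the check the patch restores.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical layout: each word = [8-bit cnt | 24 payload bits]. */
#define PAYLOAD_BITS 24
#define PAYLOAD_MASK ((1u << PAYLOAD_BITS) - 1)

struct split64 {
	uint32_t top, bottom, msb;
};

static uint32_t word_cnt(uint32_t w) { return w >> PAYLOAD_BITS; }

/* A write stamps all three words with the same counter value. */
static void split_write(struct split64 *t, uint64_t val, uint32_t cnt)
{
	uint32_t c = (cnt & 0xff) << PAYLOAD_BITS;

	t->bottom = c | (uint32_t)(val & PAYLOAD_MASK);
	t->top    = c | (uint32_t)((val >> PAYLOAD_BITS) & PAYLOAD_MASK);
	t->msb    = c | (uint32_t)(val >> (2 * PAYLOAD_BITS));
}

/*
 * A read succeeds only if the counter matches across *all three* words;
 * a mismatch means an interrupting write updated only some of them and
 * the reassembled value would be torn.
 */
static bool split_read(const struct split64 *t, uint64_t *ret)
{
	if (word_cnt(t->top) != word_cnt(t->bottom) ||
	    word_cnt(t->top) != word_cnt(t->msb))
		return false; /* interrupted a write */

	*ret = ((uint64_t)(t->msb & PAYLOAD_MASK) << (2 * PAYLOAD_BITS)) |
	       ((uint64_t)(t->top & PAYLOAD_MASK) << PAYLOAD_BITS) |
	       (t->bottom & PAYLOAD_MASK);
	return true;
}
```

If only two of the three counters were compared — the pre-fix behavior — a write interrupted after updating two words would go undetected and the reader would reassemble a torn 64-bit value.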
Link: https://lore.kernel.org/linux-trace-kernel/20231206100050.3100b7bb@gandalf.local.home

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu Desnoyers
Fixes: f03f2abce4f39 ("ring-buffer: Have 32 bit time stamps use all 64 bits")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index a6da2d765c78..8d2a4f00eca9 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -644,8 +644,8 @@ static inline bool __rb_time_read(rb_time_t *t, u64 *ret, unsigned long *cnt)
 
 	*cnt = rb_time_cnt(top);
 
-	/* If top and bottom counts don't match, this interrupted a write */
-	if (*cnt != rb_time_cnt(bottom))
+	/* If top and msb counts don't match, this interrupted a write */
+	if (*cnt != rb_time_cnt(msb))
 		return false;
 
 	/* The shift to msb will lose its cnt bits */
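The three trace.c fixes in this series all revolve around one invariant: a writer may use the preallocated per-CPU buffered event only while that CPU's counter is zero, so disabling must raise the counter on every CPU — including the caller's, which is why on_each_cpu_mask() replaces smp_call_function_many() — before any page is freed. A minimal single-threaded userspace sketch of that gating, with hypothetical names (the real code is per-CPU and RCU-protected):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_CPUS 4

/* Model state: one counter and one buffer pointer per CPU. */
static int buffered_event_cnt[NR_CPUS];
static void *buffered_event[NR_CPUS];

/*
 * Writer side, mirroring the check in trace_event_buffer_lock_reserve():
 * the fast-path buffer may be used only while the counter is zero.
 */
static void *try_use_buffered_event(int cpu)
{
	if (buffered_event_cnt[cpu] == 0 && buffered_event[cpu])
		return buffered_event[cpu];
	return NULL; /* fall back to the real ring buffer */
}

/*
 * Disable side: raise the counter on every CPU first so new writers
 * divert to the ring buffer, then (after the RCU grace periods elided
 * here) the pages can be freed and the pointers cleared.
 */
static void disable_buffered_events(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		buffered_event_cnt[cpu]++;
	/* ... synchronize_rcu(), free the pages ... */
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		buffered_event[cpu] = NULL;
}
```

The bug fixed by the on_each_cpu_mask() patch corresponds to skipping the counter increment for the caller's own CPU in this sketch: a writer on that CPU would still see a zero counter and hand out a buffer that is about to be freed.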