From patchwork Fri Dec 15 12:41:51 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 179255
Date: Fri, 15 Dec 2023 07:41:51 -0500
From: Steven Rostedt
To: LKML, Linux trace kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Joel Fernandes, Vincent Donnefort
Subject: [PATCH] ring-buffer: Remove useless update to write_stamp in rb_try_to_discard()
Message-ID: <20231215074151.447e14c2@rorschach.local.home>

From: "Steven Rostedt (Google)"

When filtering is enabled, a temporary buffer is created to hold the
content of the trace event output so that the filter logic can decide,
from that output, whether the trace event should be filtered out or
not. If it is to be filtered out, the content in the temporary buffer
is simply discarded; otherwise it is written into the trace buffer.

But if an interrupt comes in while a previous event is using that
temporary buffer, the event written by the interrupt goes into the
ring buffer itself, to prevent corrupting the data in the temporary
buffer. If that event is then to be filtered out, the event in the
ring buffer is discarded, or, if the discard fails because another
event has already come in behind it, it is turned into padding.

The update to the write_stamp in rb_try_to_discard() predates a fix
that forces the next event after the discard to use an absolute
timestamp: that fix sets the before_stamp to zero so that it no longer
matches the write_stamp (a mismatch is what causes an event to use an
absolute timestamp). rb_try_to_discard() still makes an effort to
restore the write_stamp to what it was before the event was added, but
this is useless and wasteful, because nothing is going to use that
write_stamp for calculations: it still will not match the
before_stamp.

Remove this useless update, and in doing so, remove another
cmpxchg64()! Also update the comments to reflect this change, and
remove some extra white space in another comment.
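The filtered-event flow described above can be illustrated with a minimal
user-space sketch. This is not the kernel code: the types, the
`trace_event_filtered()` helper, and the `drop_zero_first_byte` example
filter are all hypothetical stand-ins for the per-CPU scratch buffer,
filter callback, and ring buffer commit path.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define SCRATCH_SIZE 256

/* Hypothetical simplified ring buffer: a flat byte array. */
struct ring {
	char data[4096];
	size_t head;
};

typedef bool (*filter_fn)(const void *event, size_t len);

/* Example filter: discard events whose first byte is zero. */
static bool drop_zero_first_byte(const void *event, size_t len)
{
	return len > 0 && ((const char *)event)[0] != 0;
}

/*
 * Stage the event in a temporary (scratch) buffer, let the filter
 * inspect the finished output, and only copy accepted events into the
 * real ring buffer. Discarded events never touch the ring buffer;
 * the scratch buffer is simply abandoned.
 * Returns true if the event was committed.
 */
static bool trace_event_filtered(struct ring *rb, filter_fn filter,
				 const void *event, size_t len)
{
	char scratch[SCRATCH_SIZE];

	if (len > sizeof(scratch) || rb->head + len > sizeof(rb->data))
		return false;

	memcpy(scratch, event, len);

	if (!filter(scratch, len))
		return false;	/* filtered out: nothing written */

	memcpy(rb->data + rb->head, scratch, len);
	rb->head += len;
	return true;
}
```

The case the patch is concerned with is the one this sketch omits: an
interrupt arriving while the scratch buffer is busy must write directly
into the ring buffer, and its event may later need to be discarded there.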
Fixes: b2dd797543cf ("ring-buffer: Force absolute timestamp on discard of event")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 46 +++++++++-----------------------------
 1 file changed, 11 insertions(+), 35 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 378ff9e53fe6..56b97a16ea16 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2792,25 +2792,6 @@ static unsigned rb_calculate_event_length(unsigned length)
 	return length;
 }
 
-static u64 rb_time_delta(struct ring_buffer_event *event)
-{
-	switch (event->type_len) {
-	case RINGBUF_TYPE_PADDING:
-		return 0;
-
-	case RINGBUF_TYPE_TIME_EXTEND:
-		return rb_event_time_stamp(event);
-
-	case RINGBUF_TYPE_TIME_STAMP:
-		return 0;
-
-	case RINGBUF_TYPE_DATA:
-		return event->time_delta;
-	default:
-		return 0;
-	}
-}
-
 static inline bool
 rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 		  struct ring_buffer_event *event)
@@ -2818,8 +2799,6 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 	unsigned long new_index, old_index;
 	struct buffer_page *bpage;
 	unsigned long addr;
-	u64 write_stamp;
-	u64 delta;
 
 	new_index = rb_event_index(event);
 	old_index = new_index + rb_event_ts_length(event);
@@ -2828,13 +2807,10 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 
 	bpage = READ_ONCE(cpu_buffer->tail_page);
 
-	delta = rb_time_delta(event);
-
-	rb_time_read(&cpu_buffer->write_stamp, &write_stamp);
-
-	/* Make sure the write stamp is read before testing the location */
-	barrier();
-
+	/*
+	 * Make sure the tail_page is still the same and
+	 * the next write location is the end of this event
+	 */
 	if (bpage->page == (void *)addr && rb_page_write(bpage) == old_index) {
 		unsigned long write_mask =
 			local_read(&bpage->write) & ~RB_WRITE_MASK;
@@ -2845,20 +2821,20 @@ rb_try_to_discard(struct ring_buffer_per_cpu *cpu_buffer,
 	 * to make sure that the next event adds an absolute
 	 * value and does not rely on the saved write stamp, which
 	 * is now going to be bogus.
+	 *
+	 * By setting the before_stamp to zero, the next event
+	 * is not going to use the write_stamp and will instead
+	 * create an absolute timestamp. This means there's no
+	 * reason to update the write_stamp!
 	 */
 	rb_time_set(&cpu_buffer->before_stamp, 0);
 
-	/* Something came in, can't discard */
-	if (!rb_time_cmpxchg(&cpu_buffer->write_stamp,
-			     write_stamp, write_stamp - delta))
-		return false;
-
 	/*
 	 * If an event were to come in now, it would see that the
 	 * write_stamp and the before_stamp are different, and assume
 	 * that this event just added itself before updating
 	 * the write stamp. The interrupting event will fix the
-	 * write stamp for us, and use the before stamp as its delta.
+	 * write stamp for us, and use an absolute timestamp.
 	 */
 
 	/*
@@ -3295,7 +3271,7 @@ static void check_buffer(struct ring_buffer_per_cpu *cpu_buffer,
 		return;
 
 	/*
-	 * If this interrupted another event, 
+	 * If this interrupted another event,
 	 */
 	if (atomic_inc_return(this_cpu_ptr(&checking)) != 1)
 		goto out;
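The before_stamp/write_stamp reasoning behind the patch can be sketched
in a few lines of user-space C. This is a hypothetical, single-threaded
simplification (the kernel uses 64-bit stamps updated via rb_time_set()
and friends, with interrupt-safe semantics); the struct and function
names here are illustrative, not the kernel's.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the per-CPU buffer's two timestamps. */
struct cpu_buffer {
	uint64_t before_stamp;
	uint64_t write_stamp;
};

/*
 * On discard, only before_stamp is zeroed. Since it can then never
 * match write_stamp, the next writer is forced onto an absolute
 * timestamp regardless of write_stamp's value, which is why
 * restoring write_stamp (the cmpxchg64 the patch removes) is
 * wasted work.
 */
static void discard_event(struct cpu_buffer *cb)
{
	cb->before_stamp = 0;
}

/*
 * The next writer's decision: a before_stamp/write_stamp mismatch
 * means something happened in between (an interrupting or discarded
 * event), so a delta against write_stamp cannot be trusted and an
 * absolute timestamp must be emitted instead.
 */
static bool next_event_needs_absolute_ts(const struct cpu_buffer *cb)
{
	return cb->before_stamp != cb->write_stamp;
}
```

With both stamps equal, the next event may use a cheap delta; after
discard_event() zeroes before_stamp, the mismatch forces an absolute
timestamp, so the stale write_stamp is never consulted.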