From patchwork Fri Dec 15 03:29:21 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 179027
Date: Thu, 14 Dec 2023 22:29:21 -0500
From: Steven Rostedt
To: LKML, Linux Trace Kernel
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Joel Fernandes, Vincent Donnefort
Subject: [PATCH v2] ring-buffer: Do not try to put back write_stamp
Message-ID: <20231214222921.193037a7@gandalf.local.home>
X-Mailer: Claws Mail 3.19.1 (GTK+ 2.24.33; x86_64-pc-linux-gnu)

From: "Steven Rostedt (Google)"

If an update to an event is interrupted by another event between the time
the initial event allocated its buffer and when it wrote to the
write_stamp, the code tries to reset the write_stamp back to what it had
just overwritten. It knows it was overwritten by checking the
before_stamp: if the before_stamp does not match what the event wrote
there before allocating its space, it knows it was interrupted. To put
back the write_stamp, it uses the before_stamp it read.

The problem is that writing the before_stamp into the write_stamp makes
the two equal again, which means the write_stamp can be considered a
valid last timestamp written to the ring buffer. But that is not
necessarily true. The interrupting event may itself have been interrupted
in a way that leaves behind an invalid write_stamp. If that happens and
control then returns to this context, which uses the before_stamp to
update the write_stamp again, it can incorrectly make the write_stamp
appear valid, causing later events to have incorrect timestamps.

As it is OK to leave this function with an invalid write_stamp (one that
does not match the before_stamp), there is no reason to try to make it
valid again in this case. If this race happens, just leave with the
invalid write_stamp; the next event to come along will simply add an
absolute timestamp and validate everything again.
Cc: stable@vger.kernel.org
Fixes: a389d86f7fd09 ("ring-buffer: Have nested events still record running time stamp")
Signed-off-by: Steven Rostedt (Google)
---
Changes since v1: https://lore.kernel.org/all/20231214221803.1a923e10@gandalf.local.home/

- I forgot to remove the commit that deletes the rb_time_* logic.
  Had to rebase the patch.

 kernel/trace/ring_buffer.c | 29 ++++++-----------------------
 1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 1d9caee7f542..2668dde23343 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3612,14 +3612,14 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
 	}

 	if (likely(tail == w)) {
-		u64 save_before;
-		bool s_ok;
-
 		/* Nothing interrupted us between A and C */
  /*D*/		rb_time_set(&cpu_buffer->write_stamp, info->ts);
-		barrier();
- /*E*/		s_ok = rb_time_read(&cpu_buffer->before_stamp, &save_before);
-		RB_WARN_ON(cpu_buffer, !s_ok);
+		/*
+		 * If something came in between C and D, the write stamp
+		 * may now not be in sync. But that's fine as the before_stamp
+		 * will be different and then next event will just be forced
+		 * to use an absolute timestamp.
+		 */
 		if (likely(!(info->add_timestamp &
 			     (RB_ADD_STAMP_FORCE | RB_ADD_STAMP_ABSOLUTE))))
 			/* This did not interrupt any time update */
@@ -3627,24 +3627,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
 		else
 			/* Just use full timestamp for interrupting event */
 			info->delta = info->ts;
-		barrier();
 		check_buffer(cpu_buffer, info, tail);
-		if (unlikely(info->ts != save_before)) {
-			/* SLOW PATH - Interrupted between C and E */
-
-			a_ok = rb_time_read(&cpu_buffer->write_stamp, &info->after);
-			RB_WARN_ON(cpu_buffer, !a_ok);
-
-			/* Write stamp must only go forward */
-			if (save_before > info->after) {
-				/*
-				 * We do not care about the result, only that
-				 * it gets updated atomically.
-				 */
-				(void)rb_time_cmpxchg(&cpu_buffer->write_stamp,
-						      info->after, save_before);
-			}
-		}
 	} else {
 		u64 ts;
 		/* SLOW PATH - Interrupted between A and C */