From patchwork Sat Dec 16 04:22:24 2023
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 179813
Message-ID: <20231216042244.539165490@goodmis.org>
User-Agent: quilt/0.67
Date: Fri, 15 Dec 2023 23:22:24 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Mark Rutland, Mathieu Desnoyers, Andrew Morton,
 stable@vger.kernel.org, Joel Fernandes, Vincent Donnefort
Subject: [for-linus][PATCH 10/15] ring-buffer: Do not try to put back write_stamp
References: <20231216042214.905262999@goodmis.org>

From: "Steven Rostedt (Google)"

If an update to an event is interrupted by another event between the
time the initial event allocated its buffer and where it wrote to the
write_stamp, the code tries to reset the write_stamp back to what it
had just overwritten. It knows that it was overwritten by checking the
before_stamp: if the before_stamp no longer matches what it wrote there
before it allocated its space, it knows it was interrupted. To put back
the write_stamp, it uses the before_stamp it read.

The problem here is that writing the before_stamp to the write_stamp
makes the two equal again, which means the write_stamp can be
considered valid as the last timestamp written to the ring buffer. But
this is not necessarily true. The interrupting event could itself have
been interrupted in the same way, and can end up leaving with an
invalid write_stamp. If that happens and control returns to this
context, which then uses the before_stamp to update the write_stamp
again, it can incorrectly make the write_stamp look valid, causing
later events to have incorrect timestamps.
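To make the race concrete, here is a minimal single-threaded userspace
sketch of the problem. This is not the ring_buffer.c code; the
variables, the write_stamp_valid() helper and the timestamp values are
invented purely to illustrate why copying before_stamp into write_stamp
can make a stale write_stamp look valid again:

/*
 * Simplified model of the race described above (illustration only,
 * not kernel code).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t before_stamp;	/* written before reserving space      */
static uint64_t write_stamp;	/* written after committing an event   */

/* The next writer only trusts write_stamp when both stamps agree. */
static bool write_stamp_valid(void)
{
	return before_stamp == write_stamp;
}

int main(void)
{
	/* Outer event starts at ts=100 and publishes its before_stamp. */
	uint64_t outer_ts = 100;
	before_stamp = outer_ts;

	/*
	 * A nested event interrupts at ts=110: it bumps before_stamp but
	 * is itself interrupted before it can update write_stamp, so
	 * write_stamp stays stale.
	 */
	before_stamp = 110;

	/*
	 * Outer event resumes, sees before_stamp != outer_ts, and (old
	 * behaviour) "puts back" write_stamp from the before_stamp it read.
	 */
	write_stamp = before_stamp;

	/*
	 * The stamps now match, so the next event would wrongly treat
	 * write_stamp as a valid base for a delta timestamp.
	 */
	printf("write_stamp considered valid: %s\n",
	       write_stamp_valid() ? "yes (bug)" : "no");
	return 0;
}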
As it is OK to leave this function with an invalid write_stamp (one
that doesn't match the before_stamp), there's no reason to try to make
it valid again in this case. If this race happens, just leave with the
invalid write_stamp and the next event to come along will simply add an
absolute timestamp and validate everything again.

Bonus points: This gets rid of another cmpxchg64!

Link: https://lore.kernel.org/linux-trace-kernel/20231214222921.193037a7@gandalf.local.home
Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu
Cc: Mark Rutland
Cc: Mathieu Desnoyers
Cc: Joel Fernandes
Cc: Vincent Donnefort
Fixes: a389d86f7fd09 ("ring-buffer: Have nested events still record running time stamp")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 29 ++++++-----------------------
 1 file changed, 6 insertions(+), 23 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 1d9caee7f542..2668dde23343 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3612,14 +3612,14 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
 	}
 
 	if (likely(tail == w)) {
-		u64 save_before;
-		bool s_ok;
-
 		/* Nothing interrupted us between A and C */
 		/*D*/	rb_time_set(&cpu_buffer->write_stamp, info->ts);
-		barrier();
-		/*E*/	s_ok = rb_time_read(&cpu_buffer->before_stamp, &save_before);
-		RB_WARN_ON(cpu_buffer, !s_ok);
+		/*
+		 * If something came in between C and D, the write stamp
+		 * may now not be in sync. But that's fine as the before_stamp
+		 * will be different and then next event will just be forced
+		 * to use an absolute timestamp.
+		 */
 		if (likely(!(info->add_timestamp &
 			     (RB_ADD_STAMP_FORCE | RB_ADD_STAMP_ABSOLUTE))))
 			/* This did not interrupt any time update */
@@ -3627,24 +3627,7 @@ __rb_reserve_next(struct ring_buffer_per_cpu *cpu_buffer,
 		else
 			/* Just use full timestamp for interrupting event */
 			info->delta = info->ts;
-		barrier();
 		check_buffer(cpu_buffer, info, tail);
-		if (unlikely(info->ts != save_before)) {
-			/* SLOW PATH - Interrupted between C and E */
-
-			a_ok = rb_time_read(&cpu_buffer->write_stamp, &info->after);
-			RB_WARN_ON(cpu_buffer, !a_ok);
-
-			/* Write stamp must only go forward */
-			if (save_before > info->after) {
-				/*
-				 * We do not care about the result, only that
-				 * it gets updated atomically.
-				 */
-				(void)rb_time_cmpxchg(&cpu_buffer->write_stamp,
-						      info->after, save_before);
-			}
-		}
 	} else {
 		u64 ts;
 		/* SLOW PATH - Interrupted between A and C */
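For reference, a minimal userspace sketch of the behaviour the new
comment describes; the names here (struct stamps, reserve_next()) are
invented for illustration and are not the kernel implementation. When
the stamps disagree, the next writer simply records an absolute
timestamp and resynchronizes both stamps instead of repairing
write_stamp:

/*
 * Illustrative sketch only: after the fix, an out-of-sync write_stamp
 * is left alone and the next event falls back to an absolute timestamp.
 */
#include <stdint.h>
#include <stdio.h>

struct stamps {
	uint64_t before_stamp;
	uint64_t write_stamp;
};

/* Decide how the next event at 'ts' encodes its timestamp. */
static void reserve_next(struct stamps *s, uint64_t ts)
{
	if (s->before_stamp != s->write_stamp) {
		/* Stamps out of sync: force an absolute timestamp. */
		printf("ts=%llu: absolute timestamp\n",
		       (unsigned long long)ts);
	} else {
		/* Stamps agree: a small delta from write_stamp is enough. */
		printf("ts=%llu: delta=%llu\n",
		       (unsigned long long)ts,
		       (unsigned long long)(ts - s->write_stamp));
	}
	/* This event revalidates the pair for the next writer. */
	s->before_stamp = ts;
	s->write_stamp = ts;
}

int main(void)
{
	/* Start with an out-of-sync pair, as left behind by the race. */
	struct stamps s = { .before_stamp = 110, .write_stamp = 100 };

	reserve_next(&s, 120);	/* forced absolute, resyncs the stamps */
	reserve_next(&s, 125);	/* back to cheap delta timestamps      */
	return 0;
}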