Message ID: <20231221161104.201002261@goodmis.org>
State: New
Headers (abridged):

Message-ID: <20231221161104.201002261@goodmis.org>
User-Agent: quilt/0.67
Date: Thu, 21 Dec 2023 11:10:31 -0500
From: Steven Rostedt <rostedt@goodmis.org>
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>, Mark Rutland <mark.rutland@arm.com>, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, Andrew Morton <akpm@linux-foundation.org>, Tzvetomir Stoyanov <tz.stoyanov@gmail.com>, Vincent Donnefort <vdonnefort@google.com>, Kent Overstreet <kent.overstreet@gmail.com>
Subject: [for-next][PATCH 07/16] ring-buffer: Do no swap cpu buffers if order is different
References: <20231221161024.478795180@goodmis.org>
List-Id: <linux-kernel.vger.kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Series: tracing: Add dynamic sub-buffer size for 6.8
Commit Message
Steven Rostedt
Dec. 21, 2023, 4:10 p.m. UTC
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>

As all the subbuffer orders (subbuffer sizes) must be the same throughout the ring buffer, check the order of the buffers that are doing a CPU buffer swap in ring_buffer_swap_cpu() to make sure they are the same. If they are not the same, fail the swap; otherwise the ring buffer will think the CPU buffer has a specific subbuffer size when it does not.

Link: https://lore.kernel.org/linux-trace-kernel/20231219185629.467894710@goodmis.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Tzvetomir Stoyanov <tz.stoyanov@gmail.com>
Cc: Vincent Donnefort <vdonnefort@google.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
 kernel/trace/ring_buffer.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 3c11e8e811ed..fdcd171b09b5 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -5417,6 +5417,9 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
 	if (cpu_buffer_a->nr_pages != cpu_buffer_b->nr_pages)
 		goto out;
 
+	if (buffer_a->subbuf_order != buffer_b->subbuf_order)
+		goto out;
+
 	ret = -EAGAIN;
 
 	if (atomic_read(&buffer_a->record_disabled))