From patchwork Sun Nov 20 20:07:02 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Steven Rostedt
X-Patchwork-Id: 23454
Message-ID: <20221120200733.488392212@goodmis.org>
User-Agent: quilt/0.66
Date: Sun, 20 Nov 2022 15:07:02 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org
Cc: Masami Hiramatsu, Andrew Morton
Subject: [for-linus][PATCH 02/13] ring-buffer: Include dropped pages in counting dirty pages
References: <20221120200700.725968899@goodmis.org>
MIME-Version: 1.0
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Steven Rostedt (Google)"

The function ring_buffer_nr_dirty_pages() was created to find out how many
pages are filled in the ring buffer. There are two running counters: one is
incremented whenever a new page is touched (pages_touched) and the other is
incremented whenever a page is read (pages_read). The dirty count is the
number touched minus the number read. This is used to determine whether a
blocked task should be woken up when the percentage of the ring buffer it is
waiting for has been hit.

The problem is that this accounting does not take into account dropped pages
(pages that new writes overwrote before they were read), so the dirty count
will always be greater than it should be. This makes the "buffer_percent"
file inaccurate, as the number of dirty pages ends up always being larger
than the configured percentage even when it is not, and this causes user
space to be woken up more often than it wants to be.

Add a new counter to keep track of lost pages, and include it in the
accounting of dirty pages so that the count is actually accurate.
Link: https://lkml.kernel.org/r/20221021123013.55fb6055@gandalf.local.home

Fixes: 2c2b0a78b3739 ("ring-buffer: Add percentage of ring buffer full to wake up reader")
Signed-off-by: Steven Rostedt (Google)
---
 kernel/trace/ring_buffer.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 089b1ec9cb3b..a19369c4d8df 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -519,6 +519,7 @@ struct ring_buffer_per_cpu {
 	local_t				committing;
 	local_t				commits;
 	local_t				pages_touched;
+	local_t				pages_lost;
 	local_t				pages_read;
 	long				last_pages_touch;
 	size_t				shortest_full;
@@ -894,10 +895,18 @@ size_t ring_buffer_nr_pages(struct trace_buffer *buffer, int cpu)
 size_t ring_buffer_nr_dirty_pages(struct trace_buffer *buffer, int cpu)
 {
 	size_t read;
+	size_t lost;
 	size_t cnt;
 
 	read = local_read(&buffer->buffers[cpu]->pages_read);
+	lost = local_read(&buffer->buffers[cpu]->pages_lost);
 	cnt = local_read(&buffer->buffers[cpu]->pages_touched);
+
+	if (WARN_ON_ONCE(cnt < lost))
+		return 0;
+
+	cnt -= lost;
+
 	/* The reader can read an empty page, but not more than that */
 	if (cnt < read) {
 		WARN_ON_ONCE(read > cnt + 1);
@@ -2031,6 +2040,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
 			 */
 			local_add(page_entries, &cpu_buffer->overrun);
 			local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
+			local_inc(&cpu_buffer->pages_lost);
 		}
 
 		/*
@@ -2515,6 +2525,7 @@ rb_handle_head_page(struct ring_buffer_per_cpu *cpu_buffer,
 		 */
 		local_add(entries, &cpu_buffer->overrun);
 		local_sub(BUF_PAGE_SIZE, &cpu_buffer->entries_bytes);
+		local_inc(&cpu_buffer->pages_lost);
 
 		/*
 		 * The entries will be zeroed out when we move the
@@ -5265,6 +5276,7 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 	local_set(&cpu_buffer->committing, 0);
 	local_set(&cpu_buffer->commits, 0);
 	local_set(&cpu_buffer->pages_touched, 0);
+	local_set(&cpu_buffer->pages_lost, 0);
 	local_set(&cpu_buffer->pages_read, 0);
 	cpu_buffer->last_pages_touch = 0;
 	cpu_buffer->shortest_full = 0;