ring-buffer: Fix race while reader and writer are on the same page

Message ID 20230324125037.2719020-1-zhengyejian1@huawei.com
State New
Series ring-buffer: Fix race while reader and writer are on the same page

Commit Message

Zheng Yejian March 24, 2023, 12:50 p.m. UTC
  When a user reads the file 'trace_pipe', the kernel keeps printing the
following logs that warn at "cpu_buffer->reader_page->read > rb_page_size(reader)".
It looks as if there is an infinite loop in tracing_read_pipe(). This
problem has occurred several times on arm64 when testing v5.10 and below.

  Call trace:
   rb_get_reader_page+0x248/0x1300
   rb_buffer_peek+0x34/0x160
   ring_buffer_peek+0xbc/0x224
   peek_next_entry+0x98/0xbc
   __find_next_entry+0xc4/0x1c0
   trace_find_next_entry_inc+0x30/0x94
   tracing_read_pipe+0x198/0x304
   vfs_read+0xb4/0x1e0
   ksys_read+0x74/0x100
   __arm64_sys_read+0x24/0x30
   el0_svc_common.constprop.0+0x7c/0x1bc
   do_el0_svc+0x2c/0x94
   el0_svc+0x20/0x30
   el0_sync_handler+0xb0/0xb4
   el0_sync+0x160/0x180

Then I dumped the vmcore and looked into the problematic per_cpu ring_buffer,
and found that tail_page/commit_page/reader_page all point to the same page,
while reader_page->read is obviously abnormal:
  tail_page == commit_page == reader_page == {
    .write = 0x100d20,
    .read = 0x8f9f4805,  // Far greater than 0xd20, obviously abnormal!!!
    .entries = 0x10004c,
    .real_end = 0x0,
    .page = {
      .time_stamp = 0x857257416af0,
      .commit = 0xd20,  // This page hasn't been fully filled.
      // .data[0...0xd20] seems normal
    }
  }

The root cause is most likely a race in which the reader and the writer
are on the same page, and the reader sees an event that has not been
fully committed by the writer.
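
To make the ordering problem concrete, below is a deliberately racy
user-space sketch (not kernel code) of the two paths involved; 'data'
and 'commit' are illustrative stand-ins for the event payload and
page->commit:

  /* Build with: gcc -pthread race.c -o race */
  #include <pthread.h>
  #include <stdio.h>

  static int data;   /* stands in for the event payload */
  static int commit; /* stands in for page->commit      */

  static void *writer(void *arg)
  {
          data = 42;  /* write the event...                     */
          commit = 1; /* ...then publish it; without a barrier, */
                      /* the two stores may become visible in   */
                      /* either order                           */
          return NULL;
  }

  static void *reader(void *arg)
  {
          if (commit == 1)
                  /* Without ordering, 'data' may still read as 0:
                   * the reader sees the commit but an incomplete
                   * event, matching the corruption dumped above. */
                  printf("data = %d\n", data);
          return NULL;
  }

  int main(void)
  {
          pthread_t w, r;

          pthread_create(&w, NULL, writer, NULL);
          pthread_create(&r, NULL, reader, NULL);
          pthread_join(w, NULL);
          pthread_join(r, NULL);
          return 0;
  }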

To fix this, add memory barriers to make sure that the reader only sees
completely committed events. Since commit a0fcaaed0c46 ("ring-buffer: Fix
race between reset page and reading page") already added the read barrier
in rb_get_reader_page(), here we just need to add the write barrier.
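
In C11 terms, the fix pairs a store-store barrier on the writer side
with the existing load-load barrier on the reader side. A minimal
sketch of that pairing, using C11 fences as rough stand-ins for the
kernel's smp_wmb()/smp_rmb() and the same illustrative 'data'/'commit'
variables as above:

  #include <stdatomic.h>

  static int data;           /* event payload              */
  static _Atomic int commit; /* stands in for page->commit */

  static void writer_side(void)
  {
          data = 42;
          /* like smp_wmb(): order the payload store before the
           * commit store; pairs with the acquire fence below */
          atomic_thread_fence(memory_order_release);
          atomic_store_explicit(&commit, 1, memory_order_relaxed);
  }

  static void reader_side(void)
  {
          if (atomic_load_explicit(&commit, memory_order_relaxed)) {
                  /* like smp_rmb(): pairs with the release fence
                   * above, so 'data' is fully visible here */
                  atomic_thread_fence(memory_order_acquire);
                  (void)data; /* safe to consume the event now */
          }
  }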

Fixes: 77ae365eca89 ("ring-buffer: make lockless")
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
---
 kernel/trace/ring_buffer.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)
  

Comments

Steven Rostedt March 24, 2023, 7:23 p.m. UTC | #1
On Fri, 24 Mar 2023 20:50:37 +0800
Zheng Yejian <zhengyejian1@huawei.com> wrote:

> Fixes: 77ae365eca89 ("ring-buffer: make lockless")
> Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
> ---
>  kernel/trace/ring_buffer.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index c6f47b6cfd5f..79fd5e10ee05 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -2942,6 +2942,13 @@ rb_update_event(struct ring_buffer_per_cpu *cpu_buffer,
>  		event->array[0] = length;
>  	} else
>  		event->type_len = DIV_ROUND_UP(length, RB_ALIGNMENT);
> +
> +	/*
> +	 * The 'event' may be reserved from the page which is being
> +	 * read by the reader; make sure 'event' is completely updated
> +	 * before reader_page->page->commit is set.
> +	 */
> +	smp_wmb();

This isn't the place to put this. We only care before the commit is
updated, not at *every* update to the event (this can be called several
times before a commit).

If we need to add an smp_wmb(), it's best to put it in rb_set_commit_to_write()

>  }
>  
>  static unsigned rb_calculate_event_length(unsigned length)
> @@ -4684,7 +4691,12 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
>  
>  	/*
>  	 * Make sure we see any padding after the write update
> -	 * (see rb_reset_tail())
> +	 * (see rb_reset_tail()).
> +	 *
> +	 * In addition, a writer may be writing on the reader page
> +	 * if the page has not been fully filled, so the read barrier
> +	 * is also needed to make sure we see the completely updated
> +	 * event reserved by the writer (see rb_update_event()).
>  	 */
>  	smp_rmb();
>  

I think we want this instead:

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 2d5c3caff32d..22d05cd04a3a 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3092,6 +3092,10 @@ rb_set_commit_to_write(struct ring_buffer_per_cpu *cpu_buffer)
 		if (RB_WARN_ON(cpu_buffer,
 			       rb_is_reader_page(cpu_buffer->tail_page)))
 			return;
+		/*
+		 * No need for a memory barrier here, as the update
+		 * of the tail_page did it for this page.
+		 */
 		local_set(&cpu_buffer->commit_page->page->commit,
 			  rb_page_write(cpu_buffer->commit_page));
 		rb_inc_page(&cpu_buffer->commit_page);
@@ -3101,6 +3105,8 @@ rb_set_commit_to_write(struct ring_buffer_per_cpu *cpu_buffer)
 	while (rb_commit_index(cpu_buffer) !=
 	       rb_page_write(cpu_buffer->commit_page)) {
 
+		/* Make sure the readers see the content of what is committed. */
+		smp_wmb();
 		local_set(&cpu_buffer->commit_page->page->commit,
 			  rb_page_write(cpu_buffer->commit_page));
 		RB_WARN_ON(cpu_buffer,
@@ -4676,7 +4682,12 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
 
 	/*
 	 * Make sure we see any padding after the write update
-	 * (see rb_reset_tail())
+	 * (see rb_reset_tail()).
+	 *
+	 * In addition, a writer may be writing on the reader page
+	 * if the page has not been fully filled, so the read barrier
+	 * is also needed to make sure we see the completely updated
+	 * event reserved by the writer (see rb_tail_page_update()).
 	 */
 	smp_rmb();
  

Patch

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index c6f47b6cfd5f..79fd5e10ee05 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2942,6 +2942,13 @@ rb_update_event(struct ring_buffer_per_cpu *cpu_buffer,
 		event->array[0] = length;
 	} else
 		event->type_len = DIV_ROUND_UP(length, RB_ALIGNMENT);
+
+	/*
+	 * The 'event' may be reserved from the page which is being
+	 * read by the reader; make sure 'event' is completely updated
+	 * before reader_page->page->commit is set.
+	 */
+	smp_wmb();
 }
 
 static unsigned rb_calculate_event_length(unsigned length)
@@ -4684,7 +4691,12 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
 
 	/*
 	 * Make sure we see any padding after the write update
-	 * (see rb_reset_tail())
+	 * (see rb_reset_tail()).
+	 *
+	 * In addition, a writer may be writing on the reader page
+	 * if the page has not been fully filled, so the read barrier
+	 * is also needed to make sure we see the completely updated
+	 * event reserved by the writer (see rb_update_event()).
 	 */
 	smp_rmb();