tracing: Fix bug in rb_per_cpu_empty() that might cause deadloop.
author     Haoran Luo <www@aegistudio.net>
           Wed, 21 Jul 2021 14:12:07 +0000 (14:12 +0000)
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
           Wed, 28 Jul 2021 07:14:28 +0000 (09:14 +0200)
commit 67f0d6d9883c13174669f88adac4f0ee656cc16a upstream.

The "rb_per_cpu_empty()" misinterpret the condition (as not-empty) when
"head_page" and "commit_page" of "struct ring_buffer_per_cpu" points to
the same buffer page, whose "buffer_data_page" is empty and "read" field
is non-zero.
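
To make the misinterpretation concrete, here is a minimal userspace
model (the struct, field names and values below are simplified stand-ins
for illustration, not the kernel types) that plugs the state described
above into the pre-fix check and shows it reporting "not empty" for a
head page that holds no data:

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical, stripped-down stand-ins for the kernel structures. */
  struct buffer_page {
          unsigned read;      /* bytes already consumed from this page */
          unsigned commit;    /* bytes committed to its buffer_data_page */
  };

  /* Stand-in for rb_page_commit(): committed length of the data page. */
  static unsigned rb_page_commit(const struct buffer_page *p)
  {
          return p->commit;
  }

  int main(void)
  {
          /* Reader page is exhausted; the head/commit page was read
           * before (read != 0) but its data page is empty (commit == 0). */
          struct buffer_page reader = { .read = 120, .commit = 120 };
          struct buffer_page head   = { .read = 200, .commit = 0 };
          struct buffer_page *commit = &head;   /* commit_page == head_page */

          /* The pre-fix condition from rb_per_cpu_empty(). */
          bool empty = reader.read == rb_page_commit(&reader) &&
                       (commit == &reader ||
                        (commit == &head &&
                         head.read == rb_page_commit(commit)));

          printf("empty = %d\n", empty);  /* prints 0: wrongly "not empty" */
          return 0;
  }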

An error scenario can be constructed as follows (kernel perspective; a
rough sketch of the corresponding API calls appears after the list):

1. All pages in the buffer have been accessed by reader(s), so all of
them have a non-zero "read" field.

2. Read and clear all buffer pages so that "rb_num_of_entries()" returns
0, meaning there is no more data to read. It is also required that the
"reader_page", "commit_page" and "tail_page" point to the same page,
while "head_page" is the page right after them.

3. Invoke "ring_buffer_lock_reserve()" with a "length" large enough that
the reservation shoots past the end of the current tail buffer page. Now
the "head_page", "commit_page" and "tail_page" point to the same page.

4. Discard the current event with "ring_buffer_discard_commit()", so that
"head_page", "commit_page" and "tail_page" point to a page whose buffer
data page is now empty.
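
The following is a rough kernel-module sketch of the API sequence the
four steps rely on. The module name, buffer size, event sizes and loop
counts are illustrative assumptions rather than the actual
proof-of-concept, and CPU migration is ignored for brevity:

  // SPDX-License-Identifier: GPL-2.0
  #include <linux/module.h>
  #include <linux/ring_buffer.h>
  #include <linux/smp.h>

  static int __init rb_empty_sketch_init(void)
  {
          struct trace_buffer *buf;
          struct ring_buffer_event *ev;
          int cpu = raw_smp_processor_id();
          int i;

          buf = ring_buffer_alloc(4 * PAGE_SIZE, 0);
          if (!buf)
                  return -ENOMEM;

          /* Steps 1-2: go around the ring while consuming everything, so
           * every page ends up with a non-zero "read" field and
           * rb_num_of_entries() drops back to 0. */
          for (i = 0; i < 1024; i++) {
                  ev = ring_buffer_lock_reserve(buf, 64);
                  if (ev)
                          ring_buffer_unlock_commit(buf, ev);
                  while (ring_buffer_consume(buf, cpu, NULL, NULL))
                          ;
          }

          /* Step 3: reserve an event large enough to shoot past the end
           * of the current tail page, so "head_page", "commit_page" and
           * "tail_page" land on the same page. */
          ev = ring_buffer_lock_reserve(buf, 512);

          /* Step 4: discard it, leaving that page's data page empty. */
          if (ev)
                  ring_buffer_discard_commit(buf, ev);

          /* Before the fix, ring_buffer_empty_cpu() (which calls
           * rb_per_cpu_empty()) may now report "not empty" even though
           * no entries are left. */
          pr_info("empty=%d entries=%lu\n",
                  ring_buffer_empty_cpu(buf, cpu),
                  ring_buffer_entries_cpu(buf, cpu));

          ring_buffer_free(buf);
          return 0;
  }

  static void __exit rb_empty_sketch_exit(void) { }

  module_init(rb_empty_sketch_init);
  module_exit(rb_empty_sketch_exit);
  MODULE_LICENSE("GPL");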

Once this scenario has been constructed, "tracing_read_pipe" is trapped
in an endless loop: "trace_empty()" returns 0, since "rb_per_cpu_empty()"
returns 0 when it hits the CPU containing such a constructed ring buffer.
Then "trace_find_next_entry_inc()" always returns NULL, since
"rb_num_of_entries()" reports there is no more entry to read. Finally
"trace_seq_to_user()" returns "-EBUSY", sending "tracing_read_pipe" back
to the start of the "waitagain" loop.

I've also written a proof-of-concept script that constructs the scenario
and triggers the bug automatically; you can use it to trace and validate
my reasoning above:

  https://github.com/aegistudio/RingBufferDetonator.git

Tests have been carried out on Linux kernel 5.14-rc2
(2734d6c1b1a089fb593ef6a23d4b70903526fe0c), my fixed version of the
kernel (to verify that my update fixes the bug) and some older kernels
(to determine the range of affected kernels). Test results are also
attached to the proof-of-concept repository.

Link: https://lore.kernel.org/linux-trace-devel/YPaNxsIlb2yjSi5Y@aegistudio/
Link: https://lore.kernel.org/linux-trace-devel/YPgrN85WL9VyrZ55@aegistudio
Cc: stable@vger.kernel.org
Fixes: bf41a158cacba ("ring-buffer: make reentrant")
Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org>
Signed-off-by: Haoran Luo <www@aegistudio.net>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 4ec8bf634f89f3f0938711b274175d9b2c8e9102..c30a70ab0e7ae57ab118d23a2711173f706ca577 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3081,10 +3081,30 @@ static bool rb_per_cpu_empty(struct ring_buffer_per_cpu *cpu_buffer)
        if (unlikely(!head))
                return true;
 
-       return reader->read == rb_page_commit(reader) &&
-               (commit == reader ||
-                (commit == head &&
-                 head->read == rb_page_commit(commit)));
+       /* Reader should exhaust content in reader page */
+       if (reader->read != rb_page_commit(reader))
+               return false;
+
+       /*
+        * If writers are committing on the reader page, knowing all
+        * committed content has been read, the ring buffer is empty.
+        */
+       if (commit == reader)
+               return true;
+
+       /*
+        * If writers are committing on a page other than reader page
+        * and head page, there should always be content to read.
+        */
+       if (commit != head)
+               return false;
+
+       /*
+        * Writers are committing on the head page; we only need to check
+        * whether there is committed data, and the reader will swap the
+        * reader page with the head page when it is about to read data.
+        */
+       return rb_page_commit(commit) == 0;
 }
 
 /**