perf: Add unlikely() to the ring-buffer code
author		Peter Zijlstra <peterz@infradead.org>
		Thu, 31 Oct 2013 16:20:25 +0000 (17:20 +0100)
committer	Ingo Molnar <mingo@kernel.org>
		Wed, 6 Nov 2013 11:34:19 +0000 (12:34 +0100)
Add unlikely() annotations to 'slow' paths:

When you have a sampling event but no output buffer, you have bigger
problems -- and the bail is still faster than actually doing the work.

When you have a sampling event but a control-page-only buffer, you have
bigger problems -- again, the bail is still faster than actually doing
the work.

Optimize for the case where you're not losing events -- once more, not
doing the work is still faster, but make sure that when you do have to
do the work, it's as fast as possible.

The typical watermark is 1/2 the buffer size, so most events will not
take this path.

Shrinks perf_output_begin() by 16 bytes on x86_64-defconfig.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: james.hogan@imgtec.com
Cc: Vince Weaver <vince@deater.net>
Cc: Victor Kaplansky <VICTORK@il.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Anton Blanchard <anton@samba.org>
Link: http://lkml.kernel.org/n/tip-wlg3jew3qnutm8opd0hyeuwn@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/events/ring_buffer.c

index 6929c5848d4ff5ad19a20d6804d809d45da82396..383cde47617636377ce7f0267259884557c29f2d 100644
@@ -121,17 +121,17 @@ int perf_output_begin(struct perf_output_handle *handle,
                event = event->parent;
 
        rb = rcu_dereference(event->rb);
-       if (!rb)
+       if (unlikely(!rb))
                goto out;
 
-       handle->rb      = rb;
-       handle->event   = event;
-
-       if (!rb->nr_pages)
+       if (unlikely(!rb->nr_pages))
                goto out;
 
+       handle->rb    = rb;
+       handle->event = event;
+
        have_lost = local_read(&rb->lost);
-       if (have_lost) {
+       if (unlikely(have_lost)) {
                lost_event.header.size = sizeof(lost_event);
                perf_event_header__init_id(&lost_event.header, &sample_data,
                                           event);
@@ -157,7 +157,7 @@ int perf_output_begin(struct perf_output_handle *handle,
                head += size;
        } while (local_cmpxchg(&rb->head, offset, head) != offset);
 
-       if (head - local_read(&rb->wakeup) > rb->watermark)
+       if (unlikely(head - local_read(&rb->wakeup) > rb->watermark))
                local_add(rb->watermark, &rb->wakeup);
 
        handle->page = offset >> (PAGE_SHIFT + page_order(rb));
@@ -167,7 +167,7 @@ int perf_output_begin(struct perf_output_handle *handle,
        handle->addr += handle->size;
        handle->size = (PAGE_SIZE << page_order(rb)) - handle->size;
 
-       if (have_lost) {
+       if (unlikely(have_lost)) {
                lost_event.header.type = PERF_RECORD_LOST;
                lost_event.header.misc = 0;
                lost_event.id          = event->id;