perf: optimize perf_fetch_caller_regs
author Alexei Starovoitov <ast@fb.com>
Thu, 7 Apr 2016 01:43:22 +0000 (18:43 -0700)
committer David S. Miller <davem@davemloft.net>
Fri, 8 Apr 2016 01:04:26 +0000 (21:04 -0400)
Avoid the memset in perf_fetch_caller_regs, since it is on the critical path of all tracepoints.
The function is called from perf_sw_event_sched, perf_event_task_sched_in and all of
perf_trace_##call with this_cpu_ptr(&__perf_regs[..]), which is zero-initialized by percpu
init logic, and the subsequent call to perf_arch_fetch_caller_regs initializes the same
fields on all archs. We can therefore safely drop the memset in all of the above cases and
move it into perf_ftrace_function_call, which calls perf_fetch_caller_regs with a
stack-allocated pt_regs.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
include/linux/perf_event.h
kernel/trace/trace_event_perf.c

index f291275ffd71730f39dcab3e1fd227110088325a..e89f7199c2239a2573d1fb22e7abba64b300fd5c 100644 (file)
@@ -882,8 +882,6 @@ static inline void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned lo
  */
 static inline void perf_fetch_caller_regs(struct pt_regs *regs)
 {
-       memset(regs, 0, sizeof(*regs));
-
        perf_arch_fetch_caller_regs(regs, CALLER_ADDR0);
 }
 
index 00df25fd86ef458b4ee23d645efda32426af2568..7a68afca8249e0f08abb6766a871939a9686cafc 100644 (file)
@@ -316,6 +316,7 @@ perf_ftrace_function_call(unsigned long ip, unsigned long parent_ip,
 
        BUILD_BUG_ON(ENTRY_SIZE > PERF_MAX_TRACE_SIZE);
 
+       memset(&regs, 0, sizeof(regs));
        perf_fetch_caller_regs(&regs);
 
        entry = perf_trace_buf_prepare(ENTRY_SIZE, TRACE_FN, NULL, &rctx);