tracing: Have stack tracing set filtered functions at boot
Steven Rostedt [Tue, 20 Dec 2011 03:01:00 +0000 (22:01 -0500)]
tracing: Have stack tracing set filtered functions at boot

Add stacktrace_filter= to the kernel command line that lets
the user pick specific functions to check the stack on.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Allow access to the boot time function enabling
Steven Rostedt [Tue, 20 Dec 2011 02:57:44 +0000 (21:57 -0500)]
ftrace: Allow access to the boot time function enabling

Change set_ftrace_early_filter() to ftrace_set_early_filter()
and make it a global function. This will allow other subsystems
in the kernel to enable function tracing at startup and reuse
the ftrace function parsing code.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
tracing: Have stack_tracer use a separate list of functions
Steven Rostedt [Mon, 19 Dec 2011 19:44:09 +0000 (14:44 -0500)]
tracing: Have stack_tracer use a separate list of functions

The stack_tracer is used to look at every function and check
if the current stack is bigger than the last recorded max stack size.
When a new max is found, then it saves that stack off.

Currently the stack tracer is limited by the global_ops of
the function tracer. As the stack tracer has nothing to do with
the ftrace function tracer, except that it uses it as its internal
engine, the stack tracer should have its own list.

A new file is added to the tracing debugfs directory called:

  stack_trace_filter

that can be used to select which functions you want to check the stack
on.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Decouple hash items from showing filtered functions
Steven Rostedt [Mon, 19 Dec 2011 20:21:16 +0000 (15:21 -0500)]
ftrace: Decouple hash items from showing filtered functions

The set_ftrace_filter file shows "hashed" functions, which are functions
that have operations attached to them (like traceon and traceoff).

As other subsystems may be able to show what functions they are
using for function tracing, the hash items should no longer
be shown just because the FILTER flag is set, as they have nothing
to do with other subsystems' filters.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Allow other users of function tracing to use the output listing
Steven Rostedt [Mon, 19 Dec 2011 19:41:25 +0000 (14:41 -0500)]
ftrace: Allow other users of function tracing to use the output listing

The function tracer is set up to allow any other subsystem (like perf)
to use it. Ftrace already has a way to list what functions are enabled
by the global_ops. It would be very helpful to let other users of
the function tracer use the same code.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Create ftrace_hash_empty() helper routine
Steven Rostedt [Tue, 20 Dec 2011 00:07:36 +0000 (19:07 -0500)]
ftrace: Create ftrace_hash_empty() helper routine

There are two types of hashes in the ftrace_ops; one type
is the filter_hash and the other is the notrace_hash. Either
one may be null, meaning it has no elements. But when elements
are added, the hash is allocated.

Throughout the code, a check needs to be made to see if a hash
exists or the hash has elements, but the check for whether the hash
exists is usually missing, which can cause a NULL pointer dereference.

Add a helper routine called "ftrace_hash_empty()" that returns
true if the hash doesn't exist or its count is zero, as the two
cases mean the same thing.
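
For illustration, a minimal userspace sketch of the same check, with a
hypothetical struct (not the kernel's struct ftrace_hash):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  struct hash {
          size_t count;           /* number of stored items */
          /* buckets would go here */
  };

  /* Treat "no hash allocated" and "hash with zero items" the same way. */
  static bool hash_empty(const struct hash *hash)
  {
          return !hash || hash->count == 0;
  }

  int main(void)
  {
          struct hash h = { .count = 0 };

          printf("%d %d\n", hash_empty(NULL), hash_empty(&h));
          return 0;
  }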

Last-bug-reported-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Fix ftrace hash record update with notrace
Steven Rostedt [Mon, 19 Dec 2011 23:44:44 +0000 (18:44 -0500)]
ftrace: Fix ftrace hash record update with notrace

When disabling the "notrace" records, that means we want to trace them.
If the notrace_hash is zero, it means that we want to trace all
records. But to disable a zero notrace_hash means nothing.

The check for the notrace_hash count was incorrect with:

	if (hash && !hash->count)
		return;

The comment above it correctly states that we do nothing
if the notrace_hash has a zero count. But !hash also means that
the notrace hash has a zero count. I think this was done to
protect against dereferencing NULL. But if !hash is true, then
we go through the following loop without doing a single thing.

Fix it to:

	if (!hash || !hash->count)
		return;

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Use bsearch to find record ip
Steven Rostedt [Sat, 17 Dec 2011 00:27:42 +0000 (19:27 -0500)]
ftrace: Use bsearch to find record ip

Now that each set of pages in the function list is sorted by
ip, we can use bsearch to find a record within each set of pages.
This speeds up the ftrace_location() function by orders of magnitude.

For archs (like x86) that need to add a breakpoint at every function
that will be converted from a nop to a callback and vice versa,
the breakpoint callback needs to know if the breakpoint was for
ftrace or not. It requires finding the breakpoint ip within the
records. Doing a linear search is extremely inefficient. It is
a must to be able to do a fast binary search to find these locations.
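
A minimal userspace sketch of the sort-then-bsearch idea over an array of
ip values (plain C, nothing ftrace-specific):

  #include <stdio.h>
  #include <stdlib.h>

  static int cmp_ip(const void *a, const void *b)
  {
          unsigned long x = *(const unsigned long *)a;
          unsigned long y = *(const unsigned long *)b;

          return (x > y) - (x < y);       /* avoids overflow of x - y */
  }

  int main(void)
  {
          unsigned long ips[] = { 0xc070, 0xc010, 0xc0f0, 0xc030 };
          unsigned long key = 0xc030;
          unsigned long *rec;

          qsort(ips, 4, sizeof(ips[0]), cmp_ip);
          rec = bsearch(&key, ips, 4, sizeof(ips[0]), cmp_ip);
          if (rec)
                  printf("found 0x%lx\n", *rec);
          else
                  printf("not found\n");
          return 0;
  }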

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Sort the mcount records on each page
Steven Rostedt [Fri, 16 Dec 2011 22:06:45 +0000 (17:06 -0500)]
ftrace: Sort the mcount records on each page

Sort records by the ip locations of the ftrace mcount calls on each of the
set of pages in the function list. This helps localize cache
usage when updating the function locations, and gives us
the ability to quickly find an ip location in the list.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Replace record newlist with record page list
Steven Rostedt [Fri, 16 Dec 2011 21:30:31 +0000 (16:30 -0500)]
ftrace: Replace record newlist with record page list

As new functions come in to be initialized from mcount to nop,
they are done in groups of pages, whether for the core kernel
or for a module. There's no need to keep track of these on a per-record
basis.

At startup, and as any module is loaded, the functions to be
traced are stored in a group of pages and added to the function
list at the end. We just need to keep a pointer to the first
page of the list that was added, and use that to know where to
start on the list for initializing functions.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Allocate the mcount record pages as groups
Steven Rostedt [Fri, 16 Dec 2011 21:23:44 +0000 (16:23 -0500)]
ftrace: Allocate the mcount record pages as groups

Allocate the mcount record pages as a group of pages as big
as can be allocated and waste no more than a single page.

Grouping the mcount pages as much as possible helps with cache
locality, as we do not need to redirect with descriptors as we
cross from page to page. It also allows us to do more with the
records later on (sort them with bigger benefits).
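
One way to express the "as big as possible, but waste at most one page"
rule for a single group (a hypothetical helper, not the actual ftrace
allocation routine):

  #include <stdio.h>

  #define PAGE_SIZE 4096UL

  /* Pick an order (2^order pages) for the next group of records.
   * Start from the smallest block that would hold all remaining bytes,
   * then step down while that block would leave more than one page
   * unused; leftover records simply go into the next, smaller group. */
  static int pick_group_order(unsigned long bytes)
  {
          int order = 0;

          while ((PAGE_SIZE << order) < bytes)
                  order++;
          while (order > 0 && (PAGE_SIZE << order) > bytes + PAGE_SIZE)
                  order--;
          return order;
  }

  int main(void)
  {
          /* e.g. 100000 records of 16 bytes each */
          printf("order = %d\n", pick_group_order(100000UL * 16));
          return 0;
  }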

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Remove usage of "freed" records
Steven Rostedt [Fri, 16 Dec 2011 19:42:37 +0000 (14:42 -0500)]
ftrace: Remove usage of "freed" records

Records that are added to the function trace table are
permanently there, except for modules. By separating out the
modules to their own pages that can be freed in one shot
we can remove the "freed" flag and simplify some of the record
management.

Another benefit of doing this is that we can also move the
records around; sort them.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Allow archs to modify code without stop machine
Steven Rostedt [Tue, 16 Aug 2011 13:53:39 +0000 (09:53 -0400)]
ftrace: Allow archs to modify code without stop machine

The stop machine method to modify all functions in the kernel
(some 20,000 of them) is the safest way to do so across all archs.
But some archs may not need this big hammer approach to modify code
on SMP machines, and can simply update the code they need.

Add a weak function arch_ftrace_update_code() that does the
stop_machine by default; this also lets any arch override the method.

If the arch needs to check the system and then decide if it can
avoid stop machine, it can still call ftrace_run_stop_machine() to
use the old method.
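
A small userspace illustration of the weak-default pattern (GCC/Clang
__attribute__((weak)); the bodies are stubs, not the kernel code):

  #include <stdio.h>

  static void stop_machine_stub(int command)
  {
          printf("default path: patch code under stop_machine, cmd=%d\n",
                 command);
  }

  /* Weak default.  An arch that provides its own (strong) definition of
   * arch_ftrace_update_code() replaces this one at link time. */
  void __attribute__((weak)) arch_ftrace_update_code(int command)
  {
          stop_machine_stub(command);
  }

  int main(void)
  {
          arch_ftrace_update_code(1);
          return 0;
  }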

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Do not function trace inlined functions
Steven Rostedt [Mon, 12 Dec 2011 20:22:41 +0000 (15:22 -0500)]
ftrace: Do not function trace inlined functions

When gcc inlines a function, it does not mark it with the mcount
prologue, which in turn means that inlined functions are not traced
by the function tracer. But if CONFIG_OPTIMIZE_INLINING is set, then
gcc is allowed not to inline a function that is marked inline.

Depending on the options and the compiler, a function may or may
not be traced by the function tracer, depending on whether gcc
decides to inline it or not. This has caused several
problems in the past because gcc is not always consistent about
what it decides to inline between different gcc versions.

Some places should not be traced (like paravirt native_* functions)
and these are mostly marked as inline. When gcc decides not to
inline the function, and if that function should not be traced, then
the ftrace function tracer will suddenly break when it used to work
fine. This becomes even harder to debug when different versions of
gcc will not inline that function, making the same kernel and config
work for some gcc versions and not work for others.

By making all functions marked inline not traced, the ambiguity that
gcc adds when it comes to tracing functions marked inline is removed.
All gcc versions will be consistent about which functions are traced,
and the fragility of code working with one gcc version but not another
goes away.

Note, only the inline macro when CONFIG_OPTIMIZE_INLINING is set needs
to have notrace added, as the attribute __always_inline will force
the function to be inlined and then not traced.
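
A rough sketch of that idea, using stand-in attribute macros so it builds
as plain C (this is not the exact kernel definition):

  #include <stdio.h>

  #define notrace          __attribute__((no_instrument_function))
  #define my_always_inline inline __attribute__((always_inline))

  /* When the compiler may ignore 'inline' (CONFIG_OPTIMIZE_INLINING),
   * also mark such functions notrace so tracing no longer depends on
   * gcc's inlining decision; always_inline functions need no change. */
  #ifdef CONFIG_OPTIMIZE_INLINING
  # define my_inline inline notrace
  #else
  # define my_inline my_always_inline
  #endif

  static my_inline int add(int a, int b)
  {
          return a + b;
  }

  int main(void)
  {
          printf("%d\n", add(2, 3));
          return 0;
  }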

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Fix unregister ftrace_ops accounting
Jiri Olsa [Mon, 5 Dec 2011 17:22:48 +0000 (18:22 +0100)]
ftrace: Fix unregister ftrace_ops accounting

Multiple users of the function tracer can register their functions
with the ftrace_ops structure. The accounting within ftrace will
update the counter on each function record that is being traced.
When the ftrace_ops filtering adds or removes functions, the
function records will be updated accordingly if the ftrace_ops is
still registered.

When an ftrace_ops is removed, the counters of the function records
that the ftrace_ops traces are decremented. When they reach zero
the functions that they represent are modified to stop calling the
mcount code.

When changes are made, the code is updated via stop_machine() with
a command passed to the function to tell it what to do. There is an
ENABLE and DISABLE command that tells the called function to enable
or disable the functions. But the ENABLE is really a misnomer as it
should just update the records, as records that have been enabled
and now have a count of zero should be disabled.

The DISABLE command is used to disable all functions regardless of
their counter values. This is the big off switch and is not the
complement of the ENABLE command.

To make matters worse, when a ftrace_ops is unregistered and there
is another ftrace_ops registered, neither the DISABLE nor the
ENABLE command is set when calling into the stop_machine() function
and the records will not be updated to match their counter. A command
is passed to that function that will update the mcount code to call
the registered callback directly if it is the only one left. This
means that the ftrace_ops that is still registered will have its callback
called by all functions that have been set for it as well as the ftrace_ops
that was just unregistered.

Here's a way to trigger this bug. Compile the kernel with
CONFIG_FUNCTION_PROFILER set and with CONFIG_FUNCTION_GRAPH_TRACER not set:

 CONFIG_FUNCTION_PROFILER=y
 # CONFIG_FUNCTION_GRAPH_TRACER is not set

This will force the function profiler to use the function tracer instead
of the function graph tracer.

  # cd /sys/kernel/debug/tracing
  # echo schedule > set_ftrace_filter
  # echo function > current_tracer
  # cat set_ftrace_filter
 schedule
  # cat trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 692/68108025   #P:4
 #
 #                              _-----=> irqs-off
 #                             / _----=> need-resched
 #                            | / _---=> hardirq/softirq
 #                            || / _--=> preempt-depth
 #                            ||| /     delay
 #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 #              | |       |   ||||       |         |
      kworker/0:2-909   [000] ....   531.235574: schedule <-worker_thread
           <idle>-0     [001] .N..   531.235575: schedule <-cpu_idle
      kworker/0:2-909   [000] ....   531.235597: schedule <-worker_thread
             sshd-2563  [001] ....   531.235647: schedule <-schedule_hrtimeout_range_clock

  # echo 1 > function_profile_enabled
  # echo 0 > function_profile_enabled
  # cat set_ftrace_filter
 schedule
  # cat trace
 # tracer: function
 #
 # entries-in-buffer/entries-written: 159701/118821262   #P:4
 #
 #                              _-----=> irqs-off
 #                             / _----=> need-resched
 #                            | / _---=> hardirq/softirq
 #                            || / _--=> preempt-depth
 #                            ||| /     delay
 #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 #              | |       |   ||||       |         |
           <idle>-0     [002] ...1   604.870655: local_touch_nmi <-cpu_idle
           <idle>-0     [002] d..1   604.870655: enter_idle <-cpu_idle
           <idle>-0     [002] d..1   604.870656: atomic_notifier_call_chain <-enter_idle
           <idle>-0     [002] d..1   604.870656: __atomic_notifier_call_chain <-atomic_notifier_call_chain

The same problem could have happened with the trace_probe_ops,
but they are modified with the set_ftrace_filter file, which does the
update when the file is closed.

The simple solution is to change ENABLE to UPDATE and call it every
time an ftrace_ops is unregistered.

Link: http://lkml.kernel.org/r/1323105776-26961-3-git-send-email-jolsa@redhat.com
Cc: stable@vger.kernel.org # 3.0+
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
perf tools: Add ability to synthesize event according to a sample
Andrew Vagin [Mon, 28 Nov 2011 09:03:31 +0000 (12:03 +0300)]
perf tools: Add ability to synthesize event according to a sample

It's the counterpart of perf_session__parse_sample.

v2: fixed mistakes found by David Ahern.
v3: s/data/sample/
    s/perf_event__change_sample/perf_event__synthesize_sample

Reviewed-by: David Ahern <dsahern@gmail.com>
Cc: Arun Sharma <asharma@fb.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: devel@openvz.org
Link: http://lkml.kernel.org/r/1323266161-394927-3-git-send-email-avagin@openvz.org
Signed-off-by: Andrew Vagin <avagin@openvz.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf script: Implement option for system-wide profiling
Robert Richter [Fri, 25 Nov 2011 14:05:25 +0000 (15:05 +0100)]
perf script: Implement option for system-wide profiling

The option is documented in man perf-script but was not yet implemented:

       -a
           Force system-wide collection. Scripts run without a
           <command> normally use -a by default, while scripts run
           with a <command> normally don't - this option allows the
           latter to be run in system-wide mode.

As with perf record you now can profile in system-wide mode for the
runtime of a given command, e.g.:

 # perf script -a syscall-counts sleep 2

Cc: Ingo Molnar <mingo@elte.hu>
Link: http://lkml.kernel.org/r/1322229925-10075-1-git-send-email-robert.richter@amd.com
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf script: Fix mem leaks and NULL pointer checks around strdup()s
Robert Richter [Fri, 25 Nov 2011 10:38:40 +0000 (11:38 +0100)]
perf script: Fix mem leaks and NULL pointer checks around strdup()s

Fix mem leaks and missing NULL pointer checks after strdup().

And get_script_path() did not free __script_root in case of continue.

Introduce a helper function get_script_root().
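
The pattern being fixed looks roughly like this (illustrative userspace
helper; the body is made up and is not perf's get_script_root()):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static char *dup_script_root(const char *path, const char *suffix)
  {
          char *root = strdup(path);

          if (!root)                      /* strdup() can fail: check it */
                  return NULL;
          if (!strstr(root, suffix)) {    /* no match: free before bailing */
                  free(root);
                  return NULL;
          }
          return root;
  }

  int main(void)
  {
          char *root = dup_script_root("syscall-counts-report.py", "-report.py");

          if (root) {
                  printf("%s\n", root);
                  free(root);
          }
          return 0;
  }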

Cc: Ingo Molnar <mingo@elte.hu>
Link: http://lkml.kernel.org/r/1322217520-3287-1-git-send-email-robert.richter@amd.com
Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf, x86: Expose perf capability to other modules
Gleb Natapov [Thu, 10 Nov 2011 12:57:27 +0000 (14:57 +0200)]
perf, x86: Expose perf capability to other modules

KVM needs to know perf capability to decide which PMU it can expose to a
guest.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1320929850-10480-8-git-send-email-gleb@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf, x86: Implement arch event mask as quirk
Peter Zijlstra [Tue, 6 Dec 2011 13:07:15 +0000 (14:07 +0100)]
perf, x86: Implement arch event mask as quirk

Implement the disabling of arch events as a quirk so that we can print
a message along with it. This creates some visibility into the problem
space and could allow us to work on adding more workarounds like the
AAJ80 one.

Requested-by: Ingo Molnar <mingo@elte.hu>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-wcja2z48wklzu1b0nkz0a5y7@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
x86, perf: Disable non available architectural events
Gleb Natapov [Thu, 10 Nov 2011 12:57:26 +0000 (14:57 +0200)]
x86, perf: Disable non available architectural events

Intel CPUs report non-available architectural events in cpuid leaf
0AH.EBX. Use it to disable events that are not available according
to the CPU.
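
For reference, a small x86-only userspace check of that cpuid leaf
(gcc/clang __get_cpuid(); an illustration, not the kernel's detection code):

  #include <stdio.h>
  #include <cpuid.h>

  /* CPUID leaf 0xA: EAX[31:24] is the length of the EBX bit vector, and
   * each bit set in EBX means the corresponding architectural event is
   * NOT available on this CPU. */
  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid(0xA, &eax, &ebx, &ecx, &edx)) {
                  puts("cpuid leaf 0xA not supported");
                  return 1;
          }
          printf("arch events bit vector length: %u\n", (eax >> 24) & 0xff);
          printf("unavailable-events mask (EBX): 0x%x\n", ebx);
          printf("unhalted core cycles: %savailable\n", (ebx & 1) ? "not " : "");
          return 0;
  }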

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1320929850-10480-7-git-send-email-gleb@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
jump_label: Provide jump_label_key initializers
Peter Zijlstra [Wed, 6 Jul 2011 12:20:14 +0000 (14:20 +0200)]
jump_label: Provide jump_label_key initializers

Provide two initializers for jump_label_key that initialize it enabled
or disabled. Also modify all jump_label code to allow for jump_labels to be
initialized enabled.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Jason Baron <jbaron@redhat.com>
Link: http://lkml.kernel.org/n/tip-p40e3yj21b68y03z1yv825e7@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
jump_label, x86: Fix section mismatch
Peter Zijlstra [Tue, 6 Dec 2011 16:27:29 +0000 (17:27 +0100)]
jump_label, x86: Fix section mismatch

WARNING: arch/x86/kernel/built-in.o(.text+0x4c71): Section mismatch in
reference from the function arch_jump_label_transform_static() to the
function .init.text:text_poke_early()
The function arch_jump_label_transform_static() references
the function __init text_poke_early().
This is often because arch_jump_label_transform_static lacks a __init
annotation or the annotation of text_poke_early is wrong.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Jason Baron <jbaron@redhat.com>
Link: http://lkml.kernel.org/n/tip-9lefe89mrvurrwpqw5h8xm8z@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Merge branch 'tip/perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt...
Ingo Molnar [Tue, 6 Dec 2011 18:09:15 +0000 (19:09 +0100)]
Merge branch 'tip/perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into perf/core

perf, core: Rate limit perf_sched_events jump_label patching
Gleb Natapov [Sun, 27 Nov 2011 15:59:09 +0000 (17:59 +0200)]
perf, core: Rate limit perf_sched_events jump_label patching

jump_label patching is a very expensive operation that involves pausing all
cpus. The patching of the perf_sched_events jump_label is easily controllable
from userspace by an unprivileged user.

When the user runs a loop like this:

  "while true; do perf stat -e cycles true; done"

... the performance of my test application that just increments a counter
for one second drops by 4%.

This is on a 16 cpu box with my test application using only one of
them. An impact on a real server doing real work will be worse.

Performance of the KVM PMU drops nearly 50% due to jump_label for "perf
record" since the KVM PMU implementation creates and destroys perf events
frequently.

This patch introduces a way to rate limit jump_label patching and uses
it to fix the above problem.

I believe that as jump_label use spreads the problem will become more
common and thus solving it in generic code is appropriate. Also, fixing
it in the perf code would mean moving the jump_label accounting logic into
perf code with all the ifdefs needed for a JUMP_LABEL=n kernel. With this
patch all details are nicely hidden inside the jump_label code.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Acked-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111127155909.GO2557@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf: Fix enable_on_exec for sibling events
Peter Zijlstra [Tue, 22 Nov 2011 10:25:43 +0000 (11:25 +0100)]
perf: Fix enable_on_exec for sibling events

Deng-Cheng Zhu reported that sibling events that were created disabled
with enable_on_exec would never get enabled. Iterate all events
instead of the group lists.

Reported-by: Deng-Cheng Zhu <dczhu@mips.com>
Tested-by: Deng-Cheng Zhu <dczhu@mips.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1322048382.14799.41.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf: Remove superfluous arguments
Peter Zijlstra [Wed, 23 Nov 2011 11:34:20 +0000 (12:34 +0100)]
perf: Remove superfluous arguments

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-yv4o74vh90suyghccgykbnry@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf, x86: Prefer fixed-purpose counters when scheduling
Peter Zijlstra [Thu, 10 Nov 2011 14:15:42 +0000 (15:15 +0100)]
perf, x86: Prefer fixed-purpose counters when scheduling

This avoids a scheduling failure for cases like:

  cycles, cycles, instructions, instructions (on Core2)

Which would end up being programmed like:

  PMC0, PMC1, FP-instructions, fail

Because all events will have the same weight.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-8tnwb92asqj7xajqqoty4gel@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf, x86: Fix event scheduler for constraints with overlapping counters
Robert Richter [Fri, 18 Nov 2011 11:35:22 +0000 (12:35 +0100)]
perf, x86: Fix event scheduler for constraints with overlapping counters

The current x86 event scheduler fails to resolve scheduling problems
of certain combinations of events and constraints. This happens if the
counter mask of such an event is not a subset of any other counter
mask of a constraint with an equal or higher weight, e.g. constraints
of the AMD family 15h pmu:

                        counter mask    weight

 amd_f15_PMC30          0x09            2  <--- overlapping counters
 amd_f15_PMC20          0x07            3
 amd_f15_PMC53          0x38            3

The scheduler then fails to find an existing solution. Here is an
example:

 event code     counter         failure         possible solution

 0x02E          PMC[3,0]        0               3
 0x043          PMC[2:0]        1               0
 0x045          PMC[2:0]        2               1
 0x046          PMC[2:0]        FAIL            2

The event scheduler may not select the correct counter in the first
cycle because it needs to know which subsequent events will be
scheduled. It may fail to schedule the events then.

To solve this, we now save the scheduler state of events with
overlapping counter constraints.  If we fail to schedule the events
we roll back to those states and try to use another free counter.

Constraints with overlapping counters are marked with a newly introduced
overlap flag. We set the overlap flag for such constraints to give the
scheduler a hint which events to select for counter rescheduling. The
EVENT_CONSTRAINT_OVERLAP() macro can be used for this.

Care must be taken as the rescheduling algorithm is O(n!) which will
increase scheduling cycles dramatically for an over-committed system.
The number of such EVENT_CONSTRAINT_OVERLAP() macros and their counter
masks must be kept at a minimum. Thus, the current stack is limited to
2 states to limit the number of loops the algorithm takes in the worst
case.

On systems with no overlapping-counter constraints, this
implementation does not increase the loop count compared to the
previous algorithm.
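
To make the rollback idea concrete, here is a toy backtracking assignment
over the example's counter masks (plain C illustration only; the real x86
scheduler is iterative and state-bounded, not a recursive search):

  #include <stdio.h>
  #include <stdbool.h>

  #define NCOUNTERS 6

  /* Each event may only use the counters set in its mask, and no two
   * events may share a counter.  On failure we roll back the previous
   * choice and try the next allowed counter. */
  static bool assign_events(const unsigned int *mask, int n, int i,
                            unsigned int used, int *assign)
  {
          if (i == n)
                  return true;
          for (int c = 0; c < NCOUNTERS; c++) {
                  if (!(mask[i] & (1u << c)) || (used & (1u << c)))
                          continue;
                  assign[i] = c;
                  if (assign_events(mask, n, i + 1, used | (1u << c), assign))
                          return true;
                  /* roll back and try the next allowed counter */
          }
          return false;
  }

  int main(void)
  {
          /* one PMC30-style event (mask 0x09), three PMC20-style (0x07) */
          unsigned int mask[] = { 0x09, 0x07, 0x07, 0x07 };
          int assign[4];

          if (assign_events(mask, 4, 0, 0, assign))
                  for (int i = 0; i < 4; i++)
                          printf("event %d -> counter %d\n", i, assign[i]);
          else
                  printf("FAIL\n");
          return 0;
  }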

V2:
* Renamed redo -> overlap.
* Reimplementation using perf scheduling helper functions.

V3:
* Added WARN_ON_ONCE() if out of save states.
* Changed function interface of perf_sched_restore_state() to use bool
  as return value.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1321616122-1533-3-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf, x86: Implement event scheduler helper functions
Robert Richter [Fri, 18 Nov 2011 11:35:21 +0000 (12:35 +0100)]
perf, x86: Implement event scheduler helper functions

This patch introduces x86 perf scheduler code helper functions. We
need this to later add more complex functionality to support
overlapping counter constraints (next patch).

The algorithm is modified so that the range of weight values is now
generated from the constraints. There shouldn't be other functional
changes.

With the helper functions the scheduler is controlled. There are
functions to initialize, traverse the event list, find unused counters
etc. The scheduler keeps its own state.

V3:
* Added macro for_each_set_bit_cont().
* Changed functions interfaces of perf_sched_find_counter() and
  perf_sched_next_event() to use bool as return value.
* Added some comments to make code better understandable.

V4:
* Fix broken event assignment if weight of the first event is not
  wmin (perf_sched_init()).

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1321616122-1533-2-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf: Avoid a useless pmu_disable() in the perf-tick
Peter Zijlstra [Wed, 16 Nov 2011 13:38:16 +0000 (14:38 +0100)]
perf: Avoid a useless pmu_disable() in the perf-tick

Gleb writes:

 > Currently pmu is disabled and re-enabled on each timer interrupt even
 > when no rotation or frequency adjustment is needed. On Intel CPU this
 > results in two writes into PERF_GLOBAL_CTRL MSR per tick. On bare metal
 > it does not cause significant slowdown, but when running perf in a virtual
 > machine it leads to 20% slowdown on my machine.

Cure this by keeping a perf_event_context::nr_freq counter that counts the
number of active events that require frequency adjustments and use this in a
similar fashion to the already existing nr_events != nr_active test in
perf_rotate_context().

By being able to exclude both rotation and frequency adjustments a priori for
the common case we can avoid the otherwise superfluous PMU disable.
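
A sketch of that early-out, with hypothetical context fields (illustration
only, not the perf code):

  #include <stdio.h>
  #include <stdbool.h>

  struct ctx {
          int nr_events;
          int nr_active;
          int nr_freq;
  };

  static void pmu_disable(void)           { puts("pmu_disable"); }
  static void pmu_enable(void)            { puts("pmu_enable"); }
  static void adjust_freq(struct ctx *c)  { (void)c; puts("adjust freq"); }
  static void rotate(struct ctx *c)       { (void)c; puts("rotate groups"); }

  static void tick(struct ctx *c)
  {
          bool need_rotate = c->nr_events != c->nr_active;
          bool need_freq   = c->nr_freq != 0;

          if (!need_rotate && !need_freq)
                  return;         /* common case: leave the PMU alone */

          pmu_disable();
          if (need_freq)
                  adjust_freq(c);
          if (need_rotate)
                  rotate(c);
          pmu_enable();
  }

  int main(void)
  {
          struct ctx idle = { 2, 2, 0 };  /* nothing to do this tick */
          struct ctx busy = { 4, 2, 1 };

          tick(&idle);
          tick(&busy);
          return 0;
  }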

Suggested-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-515yhoatehd3gza7we9fapaa@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Merge branch 'perf/urgent' into perf/core
Ingo Molnar [Tue, 6 Dec 2011 05:42:35 +0000 (06:42 +0100)]
Merge branch 'perf/urgent' into perf/core

Merge reason: Add these cherry-picked commits so that future changes
              on perf/core don't conflict.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
ftrace: Fix hash record accounting bug
Steven Rostedt [Sat, 5 Nov 2011 00:32:39 +0000 (20:32 -0400)]
ftrace: Fix hash record accounting bug

If the set_ftrace_filter is cleared by writing just whitespace to
it, then the filter hash refcounts will be decremented but not
updated. This causes two bugs:

1) No functions will be enabled for tracing when they all should be

2) If the user clears the set_ftrace_filter twice, it will crash ftrace:

------------[ cut here ]------------
WARNING: at /home/rostedt/work/git/linux-trace.git/kernel/trace/ftrace.c:1384 __ftrace_hash_rec_update.part.27+0x157/0x1a7()
Modules linked in:
Pid: 2330, comm: bash Not tainted 3.1.0-test+ #32
Call Trace:
 [<ffffffff81051828>] warn_slowpath_common+0x83/0x9b
 [<ffffffff8105185a>] warn_slowpath_null+0x1a/0x1c
 [<ffffffff810ba362>] __ftrace_hash_rec_update.part.27+0x157/0x1a7
 [<ffffffff810ba6e8>] ? ftrace_regex_release+0xa7/0x10f
 [<ffffffff8111bdfe>] ? kfree+0xe5/0x115
 [<ffffffff810ba51e>] ftrace_hash_move+0x2e/0x151
 [<ffffffff810ba6fb>] ftrace_regex_release+0xba/0x10f
 [<ffffffff8112e49a>] fput+0xfd/0x1c2
 [<ffffffff8112b54c>] filp_close+0x6d/0x78
 [<ffffffff8113a92d>] sys_dup3+0x197/0x1c1
 [<ffffffff8113a9a6>] sys_dup2+0x4f/0x54
 [<ffffffff8150cac2>] system_call_fastpath+0x16/0x1b
---[ end trace 77a3a7ee73794a02 ]---

Link: http://lkml.kernel.org/r/20111101141420.GA4918@debian
Reported-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
perf: Fix parsing of __print_flags() in TP_printk()
Steven Rostedt [Fri, 4 Nov 2011 20:32:25 +0000 (16:32 -0400)]
perf: Fix parsing of __print_flags() in TP_printk()

An update is made to the sched:sched_switch event that adds some
logic to the first parameter of the __print_flags() that shows the
state of tasks. This change causes perf to fail to parse the flags.

A simple fix is needed to have the parser be able to process ops
within the argument.

Cc: stable@vger.kernel.org
Reported-by: Andrew Vagin <avagin@openvz.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
jump_label: jump_label_inc may return before the code is patched
Gleb Natapov [Tue, 18 Oct 2011 17:55:51 +0000 (19:55 +0200)]
jump_label: jump_label_inc may return before the code is patched

If cpu A calls jump_label_inc() just after atomic_add_return() is
called by cpu B, atomic_inc_not_zero() will return a value greater than
zero and jump_label_inc() will return to its caller before jump_label_update()
finishes its job on cpu B.

Link: http://lkml.kernel.org/r/20111018175551.GH17571@redhat.com
Cc: stable@vger.kernel.org
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
ftrace: Remove force undef config value left for testing
Steven Rostedt [Fri, 4 Nov 2011 14:45:23 +0000 (10:45 -0400)]
ftrace: Remove force undef config value left for testing

A forced undef of a config value was used for testing and was
accidentally left in during the final commit. This causes x86 to
run slower than needed while running function tracing, and also
causes the function graph selftest to fail when DYNAMIC_FTRACE
is not set. This is because the code in MCOUNT expects the ftrace
code to be built with the config value set, but the forced undef
left it unset.

The forced config option was left in by:
    commit 6331c28c962561aee59e5a493b7556a4bb585957
    ftrace: Fix dynamic selftest failure on some archs

Link: http://lkml.kernel.org/r/20111102150255.GA6973@debian
Cc: stable@vger.kernel.org
Reported-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
tracing: Restore system filter behavior
Li Zefan [Tue, 1 Nov 2011 01:09:35 +0000 (09:09 +0800)]
tracing: Restore system filter behavior

Though not all events have field 'prev_pid', it was allowed to do this:

  # echo 'prev_pid == 100' > events/sched/filter

but commit 75b8e98263fdb0bfbdeba60d4db463259f1fe8a2 (tracing/filter: Swap
entire filter of events) broke it without any reason.

Link: http://lkml.kernel.org/r/4EAF46CF.8040408@cn.fujitsu.com
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
tracing: fix event_subsystem ref counting
Ilya Dryomov [Mon, 31 Oct 2011 09:07:42 +0000 (11:07 +0200)]
tracing: fix event_subsystem ref counting

Fix a bug introduced by e9dbfae5, which prevents event_subsystem from
ever being released.

Ref_count was added to keep track of subsystem users, not for counting
events.  A subsystem is created with ref_count = 1, so there is no need to
increment it for every event; we have nr_events for that.  Fix this by
touching ref_count only when we actually have a new user -
subsystem_open().

Cc: stable@vger.kernel.org
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Link: http://lkml.kernel.org/r/1320052062-7846-1-git-send-email-idryomov@gmail.com
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
x86/tools: Add decoded instruction dump mode
Masami Hiramatsu [Mon, 5 Dec 2011 12:06:03 +0000 (21:06 +0900)]
x86/tools: Add decoded instruction dump mode

Add an instruction dump mode to the insn_sanity tool for
checking whether the decoder really decoded instructions correctly.

This mode is enabled by passing double -v (-vv) to
insn_sanity. It is useful for anyone who wants to check whether
the decoder can decode some instructions correctly.
e.g.
 $ echo 0f 73 10 11 | ./insn_sanity -y -vv -i -
 Instruction = {
        .prefixes = {
                .value = 0, bytes[] = {0, 0, 0, 0},
                .got = 1, .nbytes = 0},
        .rex_prefix = {
                .value = 0, bytes[] = {0, 0, 0, 0},
                .got = 1, .nbytes = 0},
        .vex_prefix = {
                .value = 0, bytes[] = {0, 0, 0, 0},
                .got = 1, .nbytes = 0},
        .opcode = {
                .value = 29455, bytes[] = {f, 73, 0, 0},
                .got = 1, .nbytes = 2},
        .modrm = {
                .value = 16, bytes[] = {10, 0, 0, 0},
                .got = 1, .nbytes = 1},
        .sib = {
                .value = 0, bytes[] = {0, 0, 0, 0},
                .got = 1, .nbytes = 0},
        .displacement = {
                .value = 0, bytes[] = {0, 0, 0, 0},
                .got = 1, .nbytes = 0},
        .immediate1 = {
                .value = 17, bytes[] = {11, 0, 0, 0},
                .got = 1, .nbytes = 1},
        .immediate2 = {
                .value = 0, bytes[] = {0, 0, 0, 0},
                .got = 0, .nbytes = 0},
        .attr = 44800, .opnd_bytes = 4, .addr_bytes = 8,
        .length = 4, .x86_64 = 1, .kaddr = 0x7fff0f7d9430}
 Success: decoded and checked 1 given instructions with 0 errors (seed:0x0)

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: yrl.pp-manager.tt@hitachi.com
Link: http://lkml.kernel.org/r/20111205120603.15475.91192.stgit@cloud
Signed-off-by: Ingo Molnar <mingo@elte.hu>
x86: Update instruction decoder to support new AVX formats
Masami Hiramatsu [Mon, 5 Dec 2011 12:05:57 +0000 (21:05 +0900)]
x86: Update instruction decoder to support new AVX formats

Since the new Intel Software Developer's Manual introduces
a new format for the AVX instruction set (including AVX2),
it is important to update x86-opcode-map.txt to fit
those changes.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: yrl.pp-manager.tt@hitachi.com
Link: http://lkml.kernel.org/r/20111205120557.15475.13236.stgit@cloud
Signed-off-by: Ingo Molnar <mingo@elte.hu>
x86/tools: Fix insn_sanity message outputs
Masami Hiramatsu [Mon, 5 Dec 2011 12:05:50 +0000 (21:05 +0900)]
x86/tools: Fix insn_sanity message outputs

Fix x86 instruction decoder test to dump all error messages
to stderr and others to stdout.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: yrl.pp-manager.tt@hitachi.com
Link: http://lkml.kernel.org/r/20111205120550.15475.70149.stgit@cloud
Signed-off-by: Ingo Molnar <mingo@elte.hu>
x86/tools: Fix instruction decoder message output
Masami Hiramatsu [Mon, 5 Dec 2011 12:05:45 +0000 (21:05 +0900)]
x86/tools: Fix instruction decoder message output

Fix the instruction decoder test (insn_sanity) so that
it doesn't show both an info and an error message for the
same instruction. (In that case, show only the error message.)

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: yrl.pp-manager.tt@hitachi.com
Link: http://lkml.kernel.org/r/20111205120545.15475.7928.stgit@cloud
Signed-off-by: Ingo Molnar <mingo@elte.hu>
x86: Fix instruction decoder to handle grouped AVX instructions
Masami Hiramatsu [Mon, 5 Dec 2011 12:05:39 +0000 (21:05 +0900)]
x86: Fix instruction decoder to handle grouped AVX instructions

To reduce the memory usage of the attribute table, the x86 instruction
decoder puts the "Group" attribute only in the "no-last-prefix"
attribute table (the same as the vex_p == 0 case).

Thus, the decoder should look at the no-last-prefix table first, and
then only if the entry is not a group, move on to the "with-last-prefix"
table (vex_p != 0).

However, the current implementation, inat_get_avx_attribute(),
looks at the with-last-prefix table directly. So, when decoding
a grouped AVX instruction, the decoder fails to find the correct
group because there is no "Group" attribute in that table.
This ends up with the mis-decoding of instructions, as Ingo
reported in http://thread.gmane.org/gmane.linux.kernel/1214103

This patch fixes it to check the no-last-prefix table first
even if that is an AVX instruction, and to get an attribute from
the "with-last-prefix" table only if the entry is not a group.

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: yrl.pp-manager.tt@hitachi.com
Link: http://lkml.kernel.org/r/20111205120539.15475.91428.stgit@cloud
Signed-off-by: Ingo Molnar <mingo@elte.hu>
x86/tools: Fix Makefile to build all test tools
Masami Hiramatsu [Mon, 5 Dec 2011 12:05:33 +0000 (21:05 +0900)]
x86/tools: Fix Makefile to build all test tools

Fix arch/x86/tools/Makefile to compile both test tools
correctly. This bug leads to a build error.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: yrl.pp-manager.tt@hitachi.com
Link: http://lkml.kernel.org/r/20111205120533.15475.62047.stgit@cloud
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Merge branch 'tip/perf/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/roste...
Ingo Molnar [Mon, 5 Dec 2011 13:34:00 +0000 (14:34 +0100)]
Merge branch 'tip/perf/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into perf/urgent

Merge branch 'perf/urgent' of git://github.com/acmel/linux into perf/urgent
Ingo Molnar [Mon, 5 Dec 2011 09:32:39 +0000 (10:32 +0100)]
Merge branch 'perf/urgent' of git://github.com/acmel/linux into perf/urgent

perf: Fix loss of notification with multi-event
Peter Zijlstra [Sat, 26 Nov 2011 01:47:31 +0000 (02:47 +0100)]
perf: Fix loss of notification with multi-event

When you do:
        $ perf record -e cycles,cycles,cycles noploop 10

You expect about 10,000 samples for each event, i.e., 10s at
1000 samples/sec. However, this is not what happens. You
get far fewer samples, maybe 3700 samples/event:

$ perf report -D | tail -15
Aggregated stats:
           TOTAL events:      10998
            MMAP events:         66
            COMM events:          2
          SAMPLE events:      10930
cycles stats:
           TOTAL events:       3644
          SAMPLE events:       3644
cycles stats:
           TOTAL events:       3642
          SAMPLE events:       3642
cycles stats:
           TOTAL events:       3644
          SAMPLE events:       3644

On an Intel Nehalem or even AMD64, there are 4 counters capable
of measuring cycles, so there is plenty of space to measure those
events without multiplexing (even with the NMI watchdog active).
And even with multiplexing, we'd expect roughly the same number
of samples per event.

The root of the problem was that when the event that caused the buffer
to become full was not the first event passed on the cmdline, the user
notification would get lost. The notification was sent to the file
descriptor of the overflowed event but the perf tool was not polling
on it.  The perf tool aggregates all samples into a single buffer,
i.e., the buffer of the first event. Consequently, it assumes
notifications for any event will come via that descriptor.

The seemingly straightforward solution of moving the waitq into the
ringbuffer object doesn't work because of lifetime issues. One could
perf_event_set_output() on a fd that you're also blocking on and cause
the old rb object to be freed while its waitq would still be
referenced by the blocked thread -> FAIL.

Therefore link all events to the ringbuffer and broadcast the wakeup
from the ringbuffer object to all possible events that could be waited
upon. This is rather ugly, and we're open to better solutions but it
works for now.

Reported-by: Stephane Eranian <eranian@google.com>
Finished-by: Stephane Eranian <eranian@google.com>
Reviewed-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111126014731.GA7030@quad
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf, x86: Force IBS LVT offset assignment for family 10h
Robert Richter [Tue, 8 Nov 2011 18:20:44 +0000 (19:20 +0100)]
perf, x86: Force IBS LVT offset assignment for family 10h

On AMD family 10h we see firmware bug messages like the following:

 [Firmware Bug]: cpu 6, try to use APIC500 (LVT offset 0) for vector 0x10400, but the register is already in use for vector 0xf9 on another cpu
 [Firmware Bug]: cpu 6, IBS interrupt offset 0 not available (MSRC001103A=0x0000000000000100)
 [Firmware Bug]: using offset 1 for IBS interrupts
 [Firmware Bug]: workaround enabled for IBS LVT offset
 perf: AMD IBS detected (0x00000007)

We always see this, since the offsets are not assigned by the BIOS for
this family. Force LVT offset assignment in this case. If the OS
assignment fails, fall back to the BIOS settings and try to set this up.

The fallback to BIOS settings weakens the family check since
force_ibs_eilvt_setup() may fail e.g. in case of virtual machines.
But setup may still succeed if BIOS offsets are correct.

Other families don't have a workaround implemented that assigns LVT
offsets. It's ok to drop calling force_ibs_eilvt_setup() for those
families.

With the patch the [Firmware Bug] messages vanish. We see now:

 IBS: LVT offset 1 assigned
 perf: AMD IBS detected (0x00000007)

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111109162225.GO12451@erda.amd.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf, x86: Disable PEBS on SandyBridge chips
Peter Zijlstra [Tue, 15 Nov 2011 09:51:15 +0000 (10:51 +0100)]
perf, x86: Disable PEBS on SandyBridge chips

Cc: Stephane Eranian <eranian@google.com>
Cc: stable@kernel.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
perf test: Soft errors shouldn't stop the "Validate PERF_RECORD_" test
Arnaldo Carvalho de Melo [Fri, 2 Dec 2011 15:53:04 +0000 (13:53 -0200)]
perf test: Soft errors shouldn't stop the "Validate PERF_RECORD_" test

For errors that don't preclude checking for further errors, aka "soft"
errors, just  continue testing for other errors.

Better coverage in verbose mode.

Suggested-by: David Ahern <dsahern@gmail.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-jafcokbj26m845dsgm2hx6az@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf test: Validate PERF_RECORD_ events and perf_sample fields
Arnaldo Carvalho de Melo [Fri, 2 Dec 2011 13:13:50 +0000 (11:13 -0200)]
perf test: Validate PERF_RECORD_ events and perf_sample fields

This new test will validate these new routines extracted from 'perf
record':

 - perf_evlist__config_attrs
 - perf_evlist__prepare_workload
 - perf_evlist__start_workload

In addition to several other perf_evlist methods.

It consists of starting a simple workload, setting up just one event to
monitor ("cycles") requesting that several PERF_SAMPLE_ fields be
present in all events.

It then will check that the expected PERF_RECORD_ events are produced
and will sanity check all its fields.

Some checks performed:

. PERF_SAMPLE_TIME monotonically increases.

. PERF_SAMPLE_CPU is the one requested with sched_setaffinity

. PERF_SAMPLE_TID and PERF_SAMPLE_PID matches the one we forked
  in perf_evlist__prepare_workload and that is stored in
  evlist->workload.pid

. For the events where these fields are also present in its
  pre-sample_id_all fields (e.g. event->mmap.pid), that they are what
  is expected too.

. That we get a bunch of mmaps:

  PATH/libcSUFFIX
  PATH/ldSUFFIX
  [vdso]
  PATH/sleep

Example:

  [root@emilia ~]# taskset -c 3,4 perf test -v1 perf_sample
   6: Validate PERF_RECORD_* events & perf_sample fields:
  --- start ---
  7159480799825 3 PERF_RECORD_SAMPLE
  7159480805584 3 PERF_RECORD_SAMPLE
  7159480807814 3 PERF_RECORD_SAMPLE
  7159480810430 3 PERF_RECORD_SAMPLE
  7159480861511 3 PERF_RECORD_MMAP 8086/8086: [0x7fffffffd000(0x2000) @ 0x7fffffffd000]: //anon
  7159481052516 3 PERF_RECORD_COMM: sleep:8086
  7159481070188 3 PERF_RECORD_MMAP 8086/8086: [0x400000(0x6000) @ 0]: /bin/sleep
  7159481077104 3 PERF_RECORD_MMAP 8086/8086: [0x3d06400000(0x221000) @ 0]: /lib64/ld-2.12.so
  7159481092912 3 PERF_RECORD_MMAP 8086/8086: [0x7fff1adff000(0x1000) @ 0x7fff1adff000]: [vdso]
  7159481196779 3 PERF_RECORD_MMAP 8086/8086: [0x3d06800000(0x37f000) @ 0]: /lib64/libc-2.12.so
  7160481558435 3 PERF_RECORD_EXIT(8086:8086):(8086:8086)
  ---- end ----
  Validate PERF_RECORD_* events & perf_sample fields: Ok
  [root@emilia ~]#

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-svag18v2z4idas0dyz3umjpq@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf event: Introduce perf_event__fprintf
Arnaldo Carvalho de Melo [Fri, 2 Dec 2011 13:06:37 +0000 (11:06 -0200)]
perf event: Introduce perf_event__fprintf

So that tools like 'perf test' can print the events when in verbose
mode, for instance.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xnovdqfi25nc48gy6604k7yp@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
trace_events_filter: Use rcu_assign_pointer() when setting ftrace_event_call->filter
Tejun Heo [Wed, 23 Nov 2011 16:49:49 +0000 (08:49 -0800)]
trace_events_filter: Use rcu_assign_pointer() when setting ftrace_event_call->filter

ftrace_event_call->filter is sched RCU protected but didn't use
rcu_assign_pointer().  Use it.
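
For background, the memory-ordering idea behind rcu_assign_pointer(), shown
with a userspace C11-atomics analogue (this is not the kernel API):

  #include <stdatomic.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct filter {
          int n_preds;
  };

  static _Atomic(struct filter *) current_filter;

  /* Publish with release semantics so a reader that observes the new
   * pointer also observes the fields initialized before it. */
  static void publish_filter(struct filter *f)
  {
          atomic_store_explicit(&current_filter, f, memory_order_release);
  }

  static struct filter *load_filter(void)
  {
          return atomic_load_explicit(&current_filter, memory_order_acquire);
  }

  int main(void)
  {
          struct filter *f = malloc(sizeof(*f));

          f->n_preds = 3;         /* initialize first ... */
          publish_filter(f);      /* ... then publish */
          printf("%d\n", load_filter()->n_preds);
          free(f);
          return 0;
  }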

TODO: Add proper __rcu annotation to call->filter and all its users.

-v2: Use RCU_INIT_POINTER() for %NULL clearing as suggested by Eric.

Link: http://lkml.kernel.org/r/20111123164949.GA29639@google.com
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: stable@kernel.org # (2.6.39+)
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
perf test: Allow running just a subset of the available tests
Arnaldo Carvalho de Melo [Tue, 29 Nov 2011 14:52:07 +0000 (12:52 -0200)]
perf test: Allow running just a subset of the available tests

To obtain a list of available tests:

[root@emilia linux]# perf test list
 1: vmlinux symtab matches kallsyms
 2: detect open syscall event
 3: detect open syscall event on all cpus
 4: read samples using the mmap interface
 5: parse events tests
[root@emilia linux]#

To list just a subset:

[root@emilia linux]# perf test list syscall
 2: detect open syscall event
 3: detect open syscall event on all cpus
[root@emilia linux]#

To run a subset:

[root@emilia linux]# perf test detect
 2: detect open syscall event: Ok
 3: detect open syscall event on all cpus: Ok
[root@emilia linux]#

Specific tests can be chosen by number:

[root@emilia linux]# perf test 1 3 parse
 1: vmlinux symtab matches kallsyms: Ok
 3: detect open syscall event on all cpus: Ok
 5: parse events tests: Ok
[root@emilia linux]#

Now to write more tests!

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-nqec2145qfxdgimux28aw7v8@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf evlist: Always do automatic allocation of pollfd and mmap structures
Arnaldo Carvalho de Melo [Tue, 29 Nov 2011 10:05:52 +0000 (08:05 -0200)]
perf evlist: Always do automatic allocation of pollfd and mmap structures

At first tools were required to do that, but while writing the python
bindings to simplify the API I made them auto-allocate when needed.

This just makes record, stat and top use that auto allocation,
simplifying them a bit.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-iokhcvkzzijr3keioubx8hlq@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Save some loops using perf_evlist__id2evsel
Arnaldo Carvalho de Melo [Mon, 28 Nov 2011 19:57:40 +0000 (17:57 -0200)]
perf tools: Save some loops using perf_evlist__id2evsel

Since we already ask for PERF_SAMPLE_ID and use it to quickly find the
associated evsel, add handler func + data to struct perf_evsel to avoid
using chains of if(strcmp(event_name)) and also to avoid all the linear
list searches via trace_event_find.

To demonstrate the technique convert 'perf sched' to it:

 # perf sched record sleep 5m

And then:

 Performance counter stats for '/tmp/oldperf sched lat':

        646.929438 task-clock                #    0.999 CPUs utilized
                 9 context-switches          #    0.000 M/sec
                 0 CPU-migrations            #    0.000 M/sec
            20,901 page-faults               #    0.032 M/sec
     1,290,144,450 cycles                    #    1.994 GHz
   <not supported> stalled-cycles-frontend
   <not supported> stalled-cycles-backend
     1,606,158,439 instructions              #    1.24  insns per cycle
       339,088,395 branches                  #  524.151 M/sec
         4,550,735 branch-misses             #    1.34% of all branches

       0.647524759 seconds time elapsed

Versus:

 Performance counter stats for 'perf sched lat':

        473.564691 task-clock                #    0.999 CPUs utilized
                 9 context-switches          #    0.000 M/sec
                 0 CPU-migrations            #    0.000 M/sec
            20,903 page-faults               #    0.044 M/sec
       944,367,984 cycles                    #    1.994 GHz
   <not supported> stalled-cycles-frontend
   <not supported> stalled-cycles-backend
     1,442,385,571 instructions              #    1.53  insns per cycle
       308,383,106 branches                  #  651.195 M/sec
         4,481,784 branch-misses             #    1.45% of all branches

       0.474215751 seconds time elapsed

[root@emilia ~]#
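
A toy sketch of the dispatch idea described above (hypothetical types, not
perf's structures):

  #include <stdio.h>

  struct sample {
          unsigned long ip;
  };

  /* Hang a handler on each event descriptor once, then dispatch by
   * pointer instead of strcmp()ing the event name for every sample. */
  struct evsel {
          const char *name;
          void (*handler)(struct evsel *ev, struct sample *s);
  };

  static void handle_sched_switch(struct evsel *ev, struct sample *s)
  {
          printf("%s: ip=%#lx\n", ev->name, s->ip);
  }

  int main(void)
  {
          struct evsel ev = {
                  .name    = "sched:sched_switch",
                  .handler = handle_sched_switch,
          };
          struct sample s = { .ip = 0xffffffff810381a0UL };

          ev.handler(&ev, &s);    /* no string comparison on the hot path */
          return 0;
  }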

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-1kbzpl74lwi6lavpqke2u2p3@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf script: Add comm filtering option
David Ahern [Mon, 21 Nov 2011 17:02:52 +0000 (10:02 -0700)]
perf script: Add comm filtering option

Allows collecting events system wide and then pulling out events for a
specific task name(s). e.g,

    perf script -c gnome-shell,gnome-terminal

Applies on top of:
    https://lkml.org/lkml/2011/11/13/74

v2->v3
- update Documentation

v1->v2
- use comm_list from symbol_conf

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1321894972-24246-1-git-send-email-dsahern@gmail.com
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: make -C consistent across commands (for cpu list arg)
David Ahern [Sun, 13 Nov 2011 18:30:08 +0000 (11:30 -0700)]
perf tools: make -C consistent across commands (for cpu list arg)

Currently the meaning of -C varies by perf command: for perf-top,
perf-stat, perf-record it means cpu list. For perf-report it means comm
list. Then perf-annotate, perf-report and perf-script use -c for cpu
list.

Fix annotate, report and script to use -C for cpu list to be consistent
with top, stat and record. This means report needs to use -c for comm
list which does introduce a backward compatibility change.

v1 -> v2
- update perf-script.txt too

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1321209008-7004-1-git-send-email-dsahern@gmail.com
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf top: Stop using globals for tool state
Arnaldo Carvalho de Melo [Mon, 28 Nov 2011 11:37:05 +0000 (09:37 -0200)]
perf top: Stop using globals for tool state

Use its 'perf_tool' base class instead.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-i33q40wwvk2zna8fd36ex6sm@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Rename perf_event_ops to perf_tool
Arnaldo Carvalho de Melo [Mon, 28 Nov 2011 10:30:20 +0000 (08:30 -0200)]
perf tools: Rename perf_event_ops to perf_tool

To better reflect that it became the base class for all tools, one that must
be embedded in each tool struct and where common stuff will be put.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-qgpc4msetqlwr8y2k7537cxe@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Resolve machine earlier and pass it to perf_event_ops
Arnaldo Carvalho de Melo [Mon, 28 Nov 2011 09:56:39 +0000 (07:56 -0200)]
perf tools: Resolve machine earlier and pass it to perf_event_ops

Reducing the exposure of perf_session further, so that we can use the
classes in cases where no perf.data file is created.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-stua66dcscsezzrcdugvbmvd@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf tools: Pass tool context in the perf_event_ops functions
Arnaldo Carvalho de Melo [Fri, 25 Nov 2011 10:19:45 +0000 (08:19 -0200)]
perf tools: Pass tool context in the perf_event_ops functions

So that we don't need to have that many globals.

Next steps will remove the 'session' pointer, which in most cases is
not needed.

Then we can rename perf_event_ops to 'perf_tool', which better describes
this class hierarchy.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-wp4djox7x6w1i2bab1pt4xxp@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf annotate: Group options in a struct
Arnaldo Carvalho de Melo [Thu, 17 Nov 2011 14:33:21 +0000 (12:33 -0200)]
perf annotate: Group options in a struct

Paving the way to remove these globals: when we change the
perf_event_ops functions to receive a pointer to the perf_event_ops as
their first parameter, that pointer will provide access to perf_annotate
via container_of.
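
A minimal sketch of the container_of pattern this enables (the struct
layout, field names and callback signature here are illustrative, not
the exact ones from the patch):

    #include <stddef.h>
    #include <stdbool.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct perf_event_ops {
        /* simplified: the real ops carry event/sample/session parameters */
        int (*sample)(struct perf_event_ops *ops);
    };

    struct perf_annotate {
        struct perf_event_ops ops;  /* base class embedded in the tool struct */
        bool force;                 /* formerly a global option */
        bool use_tui;               /* formerly a global option */
    };

    static int annotate__process_sample(struct perf_event_ops *ops)
    {
        /* recover the tool-specific state from the base-class pointer */
        struct perf_annotate *ann = container_of(ops, struct perf_annotate, ops);

        return ann->use_tui ? 1 : 0;    /* placeholder for real processing */
    }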

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xduzibqrdg3h5cttmk6p5wwc@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf report: Group options in a struct
Arnaldo Carvalho de Melo [Thu, 17 Nov 2011 14:19:04 +0000 (12:19 -0200)]
perf report: Group options in a struct

Paving the way to remove these globals: when we change the
perf_event_ops functions to receive a pointer to the perf_event_ops as
their first parameter, that pointer will provide access to perf_report
via container_of.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-2eh2vi2nb5z3tg1lvoxv09xu@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf tools: Use evsel->attr.sample_type instead of session->sample_type
Arnaldo Carvalho de Melo [Wed, 16 Nov 2011 19:02:54 +0000 (17:02 -0200)]
perf tools: Use evsel->attr.sample_type instead of session->sample_type

Eventually session->sample_type will go away, as we want to support
multiple sample types per session, so use the evsel's sample_type
instead, which is a step in that direction.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-0vwdpjcwbjezw459lw5n3ew1@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf session: Remove superfluous callchain_cursor member
Arnaldo Carvalho de Melo [Sat, 12 Nov 2011 01:10:26 +0000 (23:10 -0200)]
perf session: Remove superfluous callchain_cursor member

Since we have it in evsel->hists.callchain_cursor, remove it from
perf_session.

One more step in disentangling several places from requiring a
perf_session pointer.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-rxr5dj3di7ckyfmnz0naku1z@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf event: perf_event_ops->attr() manipulates only an evlist
Arnaldo Carvalho de Melo [Sat, 12 Nov 2011 00:45:41 +0000 (22:45 -0200)]
perf event: perf_event_ops->attr() manipulates only an evlist

Removing another case where a perf_session is required when processing
events.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ug1wtjbnva4bxwknflkkrlrh@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf evlist: Introduce id_hdr_size method out of perf_session
Arnaldo Carvalho de Melo [Sat, 12 Nov 2011 00:28:50 +0000 (22:28 -0200)]
perf evlist: Introduce id_hdr_size method out of perf_session

We will need this when not using perf_session, in cases like 'perf top'
and strace, where no perf.data file is created or consumed.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-za923wjc41q5xot5vrhuhj3j@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf symbols: Add nr_events to symbol_conf
Arnaldo Carvalho de Melo [Sat, 12 Nov 2011 00:17:32 +0000 (22:17 -0200)]
perf symbols: Add nr_events to symbol_conf

Since symbol__alloc_hists needs it, keep it in the symbol_conf struct
to avoid passing it around to many functions.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-cwv8ysvpywzjq4v3xtbd4zwv@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf ui progress: Fix divide by zero
Arnaldo Carvalho de Melo [Sat, 12 Nov 2011 00:08:07 +0000 (22:08 -0200)]
perf ui progress: Fix divide by zero

Happens in a perf.data file where one of the events had no samples.
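
The shape of the fix is roughly the guard below (an illustrative sketch,
not the literal hunk; the function name follows the ui progress code):

    #include <stdint.h>

    static void ui_progress__update(uint64_t curr, uint64_t total, const char *title)
    {
        (void)title;
        if (total == 0)     /* an event with no samples: avoid dividing by total */
            return;
        /* ... redraw the progress bar using curr * 100 / total ... */
    }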

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-j7st3oyiotvfxqde2nc41kxb@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf record: Move 'group' to perf_event_ops
Arnaldo Carvalho de Melo [Fri, 11 Nov 2011 17:12:56 +0000 (15:12 -0200)]
perf record: Move 'group' to perf_event_ops

Will be used in other tools to share the command line parsing code.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-8x0yr77r6lrd2t699s499m8n@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf session: Move threads to struct machine
Arnaldo Carvalho de Melo [Wed, 9 Nov 2011 15:24:25 +0000 (13:24 -0200)]
perf session: Move threads to struct machine

The 'machine' abstraction was introduced with 'perf kvm', where we could
have samples for the host and multiple guests, but at the time we ended
up keeping the threads of all machines in session->host_machine.

Move the threads rb_tree to struct machine to separate the namespaces.
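
Conceptually the move looks like this (a toy sketch in plain C; the real
structs use perf's rb_tree and carry many more fields):

    struct thread;                      /* opaque for this sketch */

    struct machine {
        struct thread **threads;        /* stand-in for the rb_tree of threads */
        int             nr_threads;
        int             pid;            /* host or guest */
    };

    struct perf_session {
        struct machine  host_machine;   /* host threads now live per machine */
        struct machine *guests;         /* each guest machine has its own */
        int             nr_guests;
    };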

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-mdg7sm6j3va09vtgj49gbsrp@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf record: Move mmap_pages to perf_record_opts
Arnaldo Carvalho de Melo [Wed, 9 Nov 2011 11:16:26 +0000 (09:16 -0200)]
perf record: Move mmap_pages to perf_record_opts

Tools being developed will need this to allow the user to override this
value.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-zydc1yhxfm0z35fuy95bsn1l@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf evlist: Handle default value for 'pages' on mmap method
Arnaldo Carvalho de Melo [Wed, 9 Nov 2011 11:10:47 +0000 (09:10 -0200)]
perf evlist: Handle default value for 'pages' on mmap method

Every tool that calls this and allows the user to override the value
needs this logic.
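
The kind of default handling being centralized (a sketch; the UINT_MAX
"no preference" convention and the 512 KiB default are assumptions based
on how record behaved, not a quote of the patch):

    #include <limits.h>

    /* pick a sane number of mmap pages when the user gave no override */
    static unsigned int mmap_pages_or_default(unsigned int pages, unsigned int page_size)
    {
        if (pages == UINT_MAX)                  /* no --mmap-pages given */
            pages = (512 * 1024) / page_size;   /* e.g. 512 KiB worth of pages */
        return pages;
    }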

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-lwscxpg57xfzahz5dmdfp9uz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf evlist: Introduce {prepare,start}_workload refactored from 'perf record'
Arnaldo Carvalho de Melo [Wed, 9 Nov 2011 10:47:15 +0000 (08:47 -0200)]
perf evlist: Introduce {prepare,start}_workload refactored from 'perf record'

So that we can easily start a workload in other tools.
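
A rough sketch of the fork/exec split this introduces (simplified; the
real code hangs the state off the evlist and does far more error
handling):

    #include <unistd.h>

    /* fork the workload but park it on a pipe read until we are ready */
    static int prepare_workload(const char *argv[], int *go_fd)
    {
        int fds[2];

        if (pipe(fds))
            return -1;

        if (fork() == 0) {              /* child */
            char bf;

            close(fds[1]);
            read(fds[0], &bf, 1);       /* blocks until the parent starts us */
            execvp(argv[0], (char **)argv);
            _exit(127);
        }

        close(fds[0]);
        *go_fd = fds[1];                /* the parent keeps the "cork" */
        return 0;
    }

    /* counters are set up: pull the cork and let the workload run */
    static void start_workload(int go_fd)
    {
        close(go_fd);                   /* the read() in the child returns */
    }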

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-zdsksd4aphu0nltg2lpwsw3x@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf evsel: Introduce config attr method
Arnaldo Carvalho de Melo [Tue, 8 Nov 2011 16:41:57 +0000 (14:41 -0200)]
perf evsel: Introduce config attr method

Moved out of the code in 'perf record', so that we can share option
parsing, etc. Eventually it will be used by 'perf top', but first 'trace'
will use it.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-hzjqsgnte1esk90ytq0ap98v@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf evlist: Introduce add_tracepoints method
Arnaldo Carvalho de Melo [Sat, 5 Nov 2011 10:41:51 +0000 (08:41 -0200)]
perf evlist: Introduce add_tracepoints method

Convenient way of asking for tracepoint events to be added to an
existing evlist.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-0ylj4wrg54791u0baqb9swbb@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf evlist: Introduce perf_evlist__add_attrs
Arnaldo Carvalho de Melo [Fri, 4 Nov 2011 11:10:59 +0000 (09:10 -0200)]
perf evlist: Introduce perf_evlist__add_attrs

Replacing the open coded equivalents in 'perf stat'.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-1btwadnf2tds2g07hsccsdse@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf tools: Simplify debugfs mountpoint handling code
Arnaldo Carvalho de Melo [Wed, 16 Nov 2011 16:03:07 +0000 (14:03 -0200)]
perf tools: Simplify debugfs mountpoint handling code

We don't need two PATH_MAX-sized char arrays holding it; just one in
util/debugfs.c will do.

Also rename debugfs_path to tracing_events_path, as it is not the path
to debugfs; that is debugfs_mountpoint. Both are now accessible.

This will allow accessing this code in the perf python binding without
having to drag in perf.c and util/parse-events.c.

The defaults for these variables are the canonical "/sys/kernel/debug"
and "/sys/kernel/debug/tracing/events/", removing the need for simple
tools to call debugfs_mount(NULL).
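
Roughly the two variables after the change (a sketch of the
declarations; the actual definitions live in util/debugfs.c):

    #include <limits.h>

    char debugfs_mountpoint[PATH_MAX + 1]  = "/sys/kernel/debug";
    char tracing_events_path[PATH_MAX + 1] = "/sys/kernel/debug/tracing/events";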

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ug9jvtjrsqbluuhqqxpvg30f@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf tools: Eliminate duplicate code and use PATH_MAX consistently
Arnaldo Carvalho de Melo [Wed, 16 Nov 2011 14:55:59 +0000 (12:55 -0200)]
perf tools: Eliminate duplicate code and use PATH_MAX consistently

No need for multiple definitions of STR() and die(); also use SUSv2's
PATH_MAX instead of adding MAX_PATH.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-qpujjkw7u0bf0tr4wt55cr9y@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agotracing: Add entries in buffer and total entries to default output header
Steven Rostedt [Thu, 17 Nov 2011 15:35:16 +0000 (10:35 -0500)]
tracing: Add entries in buffer and total entries to default output header

Knowing the number of event entries in the ring buffer compared to the
total number that were written is useful information. The latency format
gives this information and there is no reason the default format should
not.

This information is now added to the default header, along with the
number of online CPUs:

 # tracer: nop
 #
 # entries-in-buffer/entries-written: 159836/64690869   #P:4
 #
 #                              _-----=> irqs-off
 #                             / _----=> need-resched
 #                            | / _---=> hardirq/softirq
 #                            || / _--=> preempt-depth
 #                            ||| /     delay
 #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 #              | |       |   ||||       |         |
           <idle>-0     [000] ...2    49.442971: local_touch_nmi <-cpu_idle
           <idle>-0     [000] d..2    49.442973: enter_idle <-cpu_idle
           <idle>-0     [000] d..2    49.442974: atomic_notifier_call_chain <-enter_idle
           <idle>-0     [000] d..2    49.442976: __atomic_notifier_call_chain <-atomic_notifier

The above shows that the trace contains 159836 entries, but
64690869 were written. One could figure out that there were
64531033 entries that were dropped.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
13 years agotracing: Add irq, preempt-count and need resched info to default trace output
Steven Rostedt [Thu, 17 Nov 2011 14:34:33 +0000 (09:34 -0500)]
tracing: Add irq, preempt-count and need resched info to default trace output

People keep asking how to get the preempt count, irq, and need-resched
info, and we keep telling them to enable the latency format. Some
developers think that traces without this info are completely useless,
and for a lot of tasks they are.

The first option was to enable the latency trace as the default format,
but the header of the latency format is pretty useless for most tracers,
and it also reports the timestamp in straight microseconds from the time
the trace started. This is sometimes more difficult to read, as the
default trace reports seconds from the start of boot.

Latency format:

 # tracer: nop
 #
 # nop latency trace v1.1.5 on 3.2.0-rc1-test+
 # --------------------------------------------------------------------
 # latency: 0 us, #159771/64234230, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
 #    -----------------
 #    | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
 #    -----------------
 #
 #                  _------=> CPU#
 #                 / _-----=> irqs-off
 #                | / _----=> need-resched
 #                || / _---=> hardirq/softirq
 #                ||| / _--=> preempt-depth
 #                |||| /     delay
 #  cmd     pid   ||||| time  |   caller
 #     \   /      |||||  \    |   /
 migratio-6       0...2 41778231us+: rcu_note_context_switch <-__schedule
 migratio-6       0...2 41778233us : trace_rcu_utilization <-rcu_note_context_switch
 migratio-6       0...2 41778235us+: rcu_sched_qs <-rcu_note_context_switch
 migratio-6       0d..2 41778236us+: rcu_preempt_qs <-rcu_note_context_switch
 migratio-6       0...2 41778238us : trace_rcu_utilization <-rcu_note_context_switch
 migratio-6       0...2 41778239us+: debug_lockdep_rcu_enabled <-__schedule

default format:

 # tracer: nop
 #
 #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
 #              | |       |          |         |
      migration/0-6     [000]    50.025810: rcu_note_context_switch <-__schedule
      migration/0-6     [000]    50.025812: trace_rcu_utilization <-rcu_note_context_switch
      migration/0-6     [000]    50.025813: rcu_sched_qs <-rcu_note_context_switch
      migration/0-6     [000]    50.025815: rcu_preempt_qs <-rcu_note_context_switch
      migration/0-6     [000]    50.025817: trace_rcu_utilization <-rcu_note_context_switch
      migration/0-6     [000]    50.025818: debug_lockdep_rcu_enabled <-__schedule
      migration/0-6     [000]    50.025820: debug_lockdep_rcu_enabled <-__schedule

The latency format header has latency information that is pretty
meaningless for most tracers, although some of the header is useful, and
we can add that to the default format later as well.

What is really useful with the latency format is the irqs-off, need-resched
hard/softirq context and the preempt count.

This commit adds the option irq-info, which is on by default and adds
this information:

 # tracer: nop
 #
 #                              _-----=> irqs-off
 #                             / _----=> need-resched
 #                            | / _---=> hardirq/softirq
 #                            || / _--=> preempt-depth
 #                            ||| /     delay
 #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
 #              | |       |   ||||       |         |
           <idle>-0     [000] d..2    49.309305: cpuidle_get_driver <-cpuidle_idle_call
           <idle>-0     [000] d..2    49.309307: mwait_idle <-cpu_idle
           <idle>-0     [000] d..2    49.309309: need_resched <-mwait_idle
           <idle>-0     [000] d..2    49.309310: test_ti_thread_flag <-need_resched
           <idle>-0     [000] d..2    49.309312: trace_power_start.constprop.13 <-mwait_idle
           <idle>-0     [000] d..2    49.309313: trace_cpu_idle <-mwait_idle
           <idle>-0     [000] d..2    49.309315: need_resched <-mwait_idle

If a user wants the old format, they can disable the 'irq-info' option:

 # tracer: nop
 #
 #           TASK-PID   CPU#      TIMESTAMP  FUNCTION
 #              | |       |          |         |
           <idle>-0     [000]     49.309305: cpuidle_get_driver <-cpuidle_idle_call
           <idle>-0     [000]     49.309307: mwait_idle <-cpu_idle
           <idle>-0     [000]     49.309309: need_resched <-mwait_idle
           <idle>-0     [000]     49.309310: test_ti_thread_flag <-need_resched
           <idle>-0     [000]     49.309312: trace_power_start.constprop.13 <-mwait_idle
           <idle>-0     [000]     49.309313: trace_cpu_idle <-mwait_idle
           <idle>-0     [000]     49.309315: need_resched <-mwait_idle
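
For example, assuming the usual tracing directory under debugfs, the
option can be toggled with:

   echo noirq-info > /sys/kernel/debug/tracing/trace_options
   echo irq-info   > /sys/kernel/debug/tracing/trace_options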

Requested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
13 years agoperf session: Fix crash with invalid CPU list
David Ahern [Sun, 13 Nov 2011 17:45:27 +0000 (10:45 -0700)]
perf session: Fix crash with invalid CPU list

commit 5d67be9 added the option to specify a range of CPUs of interest,
but it does not catch an invalid CPU list:

$ perf script -c foo
Segmentation fault (core dumped)
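
The kind of check the fix adds (an illustrative sketch that leans on
perf's own cpu_map__new() and pr_err() helpers; the surrounding session
code is condensed):

    struct cpu_map *map = cpu_map__new(cpu_list);

    if (map == NULL) {          /* e.g. "foo" does not parse as a cpu list */
        pr_err("Invalid cpu_list\n");
        return -1;
    }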

Cc: Anton Blanchard <anton@samba.org>
Link: http://lkml.kernel.org/r/1321206327-5881-1-git-send-email-dsahern@gmail.com
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoperf python: Fix undefined symbol problem
Arnaldo Carvalho de Melo [Fri, 4 Nov 2011 10:16:58 +0000 (08:16 -0200)]
perf python: Fix undefined symbol problem

Recently we made perf_evsel__init call hists__init, which broke the perf
python binding:

[root@emilia linux]# ./tools/perf/python/twatch.py
Traceback (most recent call last):
  File "./tools/perf/python/twatch.py", line 16, in <module>
    import perf
ImportError: /home/acme/git/build/perf/python/perf.so: undefined symbol: hists__init

Fix it by moving the hists__init function to its only caller, evsel.c.

This way we avoid dragging in other parts of tools/perf/util/ to the
perf python binding.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-5nffmdt5mu6ozxgj54oi4qon@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
13 years agoMerge branch 'core' of git://amd64.org/linux/rric into perf/core
Ingo Molnar [Tue, 15 Nov 2011 10:05:18 +0000 (11:05 +0100)]
Merge branch 'core' of git://amd64.org/linux/rric into perf/core

13 years agoMerge branch 'urgent' of git://amd64.org/linux/rric into perf/urgent
Ingo Molnar [Tue, 15 Nov 2011 10:03:30 +0000 (11:03 +0100)]
Merge branch 'urgent' of git://amd64.org/linux/rric into perf/urgent

13 years agoevents: Don't divide events if it has field period
Andrew Vagin [Mon, 7 Nov 2011 12:54:12 +0000 (15:54 +0300)]
events: Don't divide events if it has field period

This patch solves the following problem:

Currently some samples may be lost due to throttling. The number of
samples is restricted by sysctl_perf_event_sample_rate/HZ.  A trace event
is divided into a number of samples according to the event's period.  I
am not sure we should generate more than one sample for each trace event;
I think the better way is to use SAMPLE_PERIOD.

E.g.: I want to trace when a process sleeps. I created a process which
sleeps for 1ms and for 4ms.  perf got 100 events in both cases.

swapper     0 [000]  1141.371830: sched_stat_sleep: comm=foo pid=1801 delay=1386750 [ns]
swapper     0 [000]  1141.369444: sched_stat_sleep: comm=foo pid=1801 delay=4499585 [ns]

In one case the kernel wants to send 4499585 events and in the other it
wants to send 1386750 events. perf report shows that the process sleeps
for an equal time in both places. That is a bug.

With this patch the kernel generates one event for each "sleep" and the
time slice is saved in the "period" field. Perf knows how to handle it.
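
The gist of the change in kernel terms (a condensed paraphrase, not the
literal hunk; PERF_SAMPLE_PERIOD and perf_sample_data.period are the
existing kernel names, the helper below is hypothetical):

    /*
     * When the event samples PERF_SAMPLE_PERIOD and is not in frequency
     * mode, fold the weight into data->period and emit a single sample
     * instead of one sample per period unit.
     */
    static u64 fold_weight_into_period(struct perf_event *event,
                                       struct perf_sample_data *data, u64 nr)
    {
        if ((event->attr.sample_type & PERF_SAMPLE_PERIOD) && !event->attr.freq) {
            data->period = nr;
            return 1;
        }
        return nr;
    }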

Signed-off-by: Andrew Vagin <avagin@openvz.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1320670457-2633428-3-git-send-email-avagin@openvz.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
13 years agoperf: Carve out callchain functionality
Borislav Petkov [Sun, 16 Oct 2011 15:15:04 +0000 (17:15 +0200)]
perf: Carve out callchain functionality

Split the callchain code from the perf events core into
a new kernel/events/callchain.c file.

This simplifies the big core.c a bit.

Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
[keep ctx recursion handling inline and use internal headers]
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1318778104-17152-1-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
13 years agoperf/x86: Enable raw event access to Intel offcore events
Peter Zijlstra [Mon, 14 Nov 2011 09:03:25 +0000 (10:03 +0100)]
perf/x86: Enable raw event access to Intel offcore events

Now that the core offcore support is fixed up (thanks Stephane) and we
have sane generic events utilizing them, re-enable the raw access to
the feature as well.

Note that it doesn't matter if you use event 0x1b7 or 0x1bb to specify
an offcore event, either one works and neither guarantees you'll end
up on a particular offcore MSR.

Based on original patch from: Vince Weaver <vweaver1@eecs.utk.edu>.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Vince Weaver <vweaver1@eecs.utk.edu>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.00.1108031200390.703@cl320.eecs.utk.edu
Signed-off-by: Ingo Molnar <mingo@elte.hu>
13 years agoperf: Don't use -ENOSPC for out of PMU resources
Peter Zijlstra [Wed, 9 Nov 2011 16:56:37 +0000 (17:56 +0100)]
perf: Don't use -ENOSPC for out of PMU resources

People (Linus) objected to using -ENOSPC to signal not having enough
resources on the PMU to satisfy the request. Use -EINVAL.

Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-xv8geaz2zpbjhlx0svmpp28n@git.kernel.org
[ merged to newer kernel, fixed up MIPS impact ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
13 years agoperf: Do not set task_ctx pointer in cpuctx if there are no events in the context
Gleb Natapov [Sun, 23 Oct 2011 17:10:33 +0000 (19:10 +0200)]
perf: Do not set task_ctx pointer in cpuctx if there are no events in the context

Do not set the task_ctx pointer during sched_in if there are no events
associated with the context. Otherwise, if the total number of events in
the system drops to zero during task execution,
perf_event_context_sched_out() will not be called and cpuctx->task_ctx
will be left with a stale value.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111023171033.GI17571@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
13 years agoperf/x86: Fix PEBS instruction unwind
Peter Zijlstra [Fri, 7 Oct 2011 11:36:40 +0000 (13:36 +0200)]
perf/x86: Fix PEBS instruction unwind

Masami spotted that we always try to decode the instruction stream as
64bit instructions when running a 64bit kernel; this doesn't work for
ia32-compat proglets.

Use TIF_IA32 to detect if we need to use the 32bit instruction
decoder.

Reported-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: stable@kernel.org
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
13 years agoMerge branch 'rmobile-fixes-for-linus' of git://github.com/pmundt/linux-sh
Linus Torvalds [Mon, 14 Nov 2011 08:47:04 +0000 (06:47 -0200)]
Merge branch 'rmobile-fixes-for-linus' of git://github.com/pmundt/linux-sh

* 'rmobile-fixes-for-linus' of git://github.com/pmundt/linux-sh:
  ARM: mach-shmobile: cpuidle single/global and last_state fixes
  ARM: mach-shmobile: move helper macro PORTCR to sh_pfc.h
  ARM: mach-shmobile: move helper macro PORT_xx to sh_pfc.h
  ARM: mach-shmobile: move helper macro PORT_DATA_xx to sh_pfc.h
  ARM: mach-shmobile: ap4evb: remove white space from end of line
  ARM: mach-shmobile: clock-sh7372: remove un-necessary index
  ARM: mach-shmobile: kota2: add comment out separator
  ARM: mach-shmobile: sh73a0: add MMC data pin pull-up

13 years agoMerge branch 'sh-fixes-for-linus' of git://github.com/pmundt/linux-sh
Linus Torvalds [Mon, 14 Nov 2011 08:45:30 +0000 (06:45 -0200)]
Merge branch 'sh-fixes-for-linus' of git://github.com/pmundt/linux-sh

* 'sh-fixes-for-linus' of git://github.com/pmundt/linux-sh:
  mailmap: Fix up some renesas attributions
  sh: clkfwk: Kill off remaining debugfs cruft.
  drivers: sh: Kill off dead pathname for runtime PM stub.
  drivers: sh: Generalize runtime PM platform stub.
  sh: Wire up process_vm syscalls.
  sh: clkfwk: add clk_rate_mult_range_round()
  serial: sh-sci: Fix up SH-2A SCIF support.
  sh: Fix cached/uncached address calculation in 29bit mode

13 years agoMerge git://github.com/rustyrussell/linux
Linus Torvalds [Mon, 14 Nov 2011 06:00:01 +0000 (04:00 -0200)]
Merge git://github.com/rustyrussell/linux

* git://github.com/rustyrussell/linux:
  virtio-pci: fix use after free

13 years agovirtio-pci: fix use after free
Michael S. Tsirkin [Mon, 7 Nov 2011 16:37:05 +0000 (18:37 +0200)]
virtio-pci: fix use after free

Commit 31a3ddda166cda86d2b5111e09ba4bda5239fae6 introduced
a use after free in virtio-pci. The main issue is
that the release method signals removal of the virtio device,
while remove signals removal of the pci device.

For example, on driver removal or hot-unplug,
virtio_pci_release_dev is called before virtio_pci_remove.
We then might get a crash as virtio_pci_remove tries to use the
device freed by virtio_pci_release_dev.

We allocate/free all resources together with the
pci device, so we can leave the release method empty.
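
The resulting release method is essentially empty (a sketch of the
approach; the comment is ours):

    /* nothing to free here: all resources are released in virtio_pci_remove()
     * together with the pci device itself */
    static void virtio_pci_release_dev(struct device *_d)
    {
    }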

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: stable@kernel.org
13 years agoMerge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Linus Torvalds [Sun, 13 Nov 2011 19:09:55 +0000 (17:09 -0200)]
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux

* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
  drm/radeon/kms/combios: fix dynamic allocation of PM clock modes

13 years agoACPI / cpuidle: Remove acpi_idle_suspend (to fix suspend regression)
Rafael J. Wysocki [Sat, 12 Nov 2011 22:17:27 +0000 (23:17 +0100)]
ACPI / cpuidle: Remove acpi_idle_suspend (to fix suspend regression)

After commit e978aa7d7d57 ("cpuidle: Move dev->last_residency update to
driver enter routine; remove dev->last_state") setting acpi_idle_suspend
to 1 by acpi_processor_suspend() causes the ACPI cpuidle routines to
return error codes continuously, which in turn causes cpuidle to lock up
(hard).

However, acpi_idle_suspend doesn't appear to be useful for any
particular purpose (it's racy and doesn't really provide any real
protection), so it can be removed, which makes the problem go away.

Reported-and-tested-by: Tomas M. <tmezzadra@gmail.com>
Reported-and-tested-by: Ferenc Wagner <wferi@niif.hu>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
13 years agodrm/radeon/kms/combios: fix dynamic allocation of PM clock modes
Alex Deucher [Sat, 12 Nov 2011 16:57:29 +0000 (11:57 -0500)]
drm/radeon/kms/combios: fix dynamic allocation of PM clock modes

I missed the combios path when I updated the atombios pm code.

Reported by amarsh04 on IRC.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
13 years agoMerge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Linus Torvalds [Sat, 12 Nov 2011 15:29:04 +0000 (13:29 -0200)]
Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  arm/imx: fix imx6q mmc error when mounting rootfs
  arm/imx: fix AUTO_ZRELADDR selection
  arm/imx: fix the references to ARCH_MX3
  ARM: mx51/53: set pwm clock parent to ipg_perclk
  arm/tegra: enable headphone detection gpio on seaboard
  arm/dt: Fix ventana SDHCI power-gpios
  arm/tegra: Don't create duplicate gpio and pinmux devices
  ARM: at91: Fix USBA gadget registration
  atmel/spi: fix missing probe
  at91/yl-9200: Fix section mismatch
  at91: vmalloc fix missing AT91_VIRT_BASE define
  ARM: at91: usart: drop static map regs for dbgu
  ARM: picoxcell: add extra temp register to addruart
  ARM: msm: fix compilation flags for MSM_SCM
  arm/mxs: fix mmc device adding for mach-mx28evk
  ARM: mxc: Remove test_for_ltirq
  ARM:i.MX: fix build error in clock-mx51-mx53.c
  ARM:i.MX: fix build error in tzic/avic.c
  ARM: mxc: fix local timer interrupt handling
  msm: boards: Fix fallout from removal of machine_desc in fixup