Peter Zijlstra [Fri, 26 Feb 2010 16:07:35 +0000 (17:07 +0100)]
perf_event, amd: Fix spinlock initialization
Prevent kernels from exploding on AMD machines when any lock debugging bits are enabled.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 26 Feb 2010 15:36:23 +0000 (16:36 +0100)]
perf_event: Fix preempt warning in perf_clock()
A recent commit introduced a preemption warning in perf_clock(); use raw_smp_processor_id() to avoid it, since it really doesn't matter which CPU we use here.
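A minimal sketch of the resulting helper, assuming perf_clock() is just a thin wrapper around cpu_clock():
static inline u64 perf_clock(void)
{
	/* any CPU's clock is fine here, so don't require preemption
	 * to be disabled just to read it */
	return cpu_clock(raw_smp_processor_id());
}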
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <
1267198583.22519.684.camel@laptop>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
David S. Miller [Fri, 26 Feb 2010 15:08:34 +0000 (12:08 -0300)]
perf tools: Flush maps on COMM events
Even though we don't register the counters until the child is right about
to exec(), we're still going to get at least a few events while the
fork()'d child is still executing 'perf' and in particular we're going to
get the MMAP events.
We can't distinguish the ones in the newly executed process because the
PID will be the same.
One way to solve this would be to have a PERF_RECORD_EXEC event, and when this is seen 'perf' can flush its map cache. We can't use PERF_RECORD_COMM since that's generated by other things, not just exec().
Actually, thinking about it some more, using PERF_RECORD_COMM might be a good enough approximation.
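A minimal sketch of that approximation, with hypothetical helper names (the real perf tools code is organized differently): on a COMM event, drop the thread's cached maps before recording the new comm.
static int process_comm_event(event_t *event, struct perf_session *session)
{
	struct thread *thread = perf_session__findnew(session, event->comm.pid);

	/* maps collected while the fork()'d child was still running 'perf'
	 * are stale once the exec() that produced this COMM event happens */
	thread__flush_maps(thread);		/* hypothetical helper */
	thread__set_comm(thread, event->comm.comm);
	return 0;
}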
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1267196914-16238-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 26 Feb 2010 11:05:05 +0000 (12:05 +0100)]
perf_events, x86: Split PMU definitions into separate files
Split the amd, p6 and intel bits into separate files so that we can easily deal with the CONFIG_CPU_SUP_* options; this is needed to make things build now that perf_event.c relies on symbols from amd.c.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Fri, 26 Feb 2010 14:23:14 +0000 (11:23 -0300)]
perf annotate: Handle samples not at objdump output addr boundaries
Without this patch we get this for need_resched:
[root@mica ~]# perf annotate need_resched
------------------------------------------------
Percent | Source code & Disassembly of vmlinux
------------------------------------------------
:
:
: Disassembly of section .text:
:
:
ffffffff810095ed <need_resched>:
: return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
: }
:
: static inline int need_resched(void)
: {
0.00 : ffffffff810095ed: 55 push %rbp
: return unlikely(test_thread_flag(TIF_NEED_RESCHED));
0.00 : ffffffff810095ee: be 03 00 00 00 mov $0x3,%esi
:
: static inline struct thread_info *current_thread_info(void)
: {
: struct thread_info *ti;
: ti = (void *)(percpu_read_stable(kernel_stack) +
0.00 : ffffffff810095f3: 65 48 8b 3c 25 48 b5 mov %gs:0xb548,%rdi
0.00 : ffffffff810095fa: 00 00
: return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
: }
:
: static inline int need_resched(void)
: {
0.00 : ffffffff810095fc: 48 89 e5 mov %rsp,%rbp
: return unlikely(test_thread_flag(TIF_NEED_RESCHED));
0.00 : ffffffff810095ff: 48 81 ef d8 1f 00 00 sub $0x1fd8,%rdi
0.00 : ffffffff81009606: e8 9d ff ff ff callq ffffffff810095a8 <test_ti_thread_flag>
: }
0.00 : ffffffff8100960b: c9 leaveq
0.00 : ffffffff8100960c: 85 c0 test %eax,%eax
0.00 : ffffffff8100960e: 0f 95 c0 setne %al
0.00 : ffffffff81009611: 0f b6 c0 movzbl %al,%eax
: Disassembly of section .vsyscall_0:
: Disassembly of section .vsyscall_fn:
: Disassembly of section .vsyscall_1:
: Disassembly of section .vsyscall_2:
: Disassembly of section .init.text:
: Disassembly of section .altinstr_replacement:
: Disassembly of section .exit.text:
[root@mica ~]#
But from the 'perf report' output we know that there are hits for need_resched on a 4-way machine mostly doing nothing, so after adding code to show what is in each hist offset and collapsing IP hits for what happens between objdump lines we get, for the same perf.data file:
[root@mica ~]# perf annotate -v need_resched
------------------------------------------------
Percent | Source code & Disassembly of vmlinux
------------------------------------------------
:
:
: Disassembly of section .text:
:
:
ffffffff810095ed <need_resched>:
: return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
: }
:
: static inline int need_resched(void)
: {
0.00 : ffffffff810095ed: 55 push %rbp
: return unlikely(test_thread_flag(TIF_NEED_RESCHED));
52.78 : ffffffff810095ee: be 03 00 00 00 mov $0x3,%esi
:
: static inline struct thread_info *current_thread_info(void)
: {
: struct thread_info *ti;
: ti = (void *)(percpu_read_stable(kernel_stack) +
0.00 : ffffffff810095f3: 65 48 8b 3c 25 48 b5 mov %gs:0xb548,%rdi
0.00 : ffffffff810095fa: 00 00
: return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
: }
:
: static inline int need_resched(void)
: {
0.00 : ffffffff810095fc: 48 89 e5 mov %rsp,%rbp
: return unlikely(test_thread_flag(TIF_NEED_RESCHED));
9.72 : ffffffff810095ff: 48 81 ef d8 1f 00 00 sub $0x1fd8,%rdi
0.00 : ffffffff81009606: e8 9d ff ff ff callq ffffffff810095a8 <test_ti_thread_flag>
: }
0.00 : ffffffff8100960b: c9 leaveq
0.00 : ffffffff8100960c: 85 c0 test %eax,%eax
37.50 : ffffffff8100960e: 0f 95 c0 setne %al
0.00 : ffffffff81009611: 0f b6 c0 movzbl %al,%eax
: Disassembly of section .vsyscall_0:
: Disassembly of section .vsyscall_fn:
: Disassembly of section .vsyscall_1:
: Disassembly of section .vsyscall_2:
: Disassembly of section .init.text:
: Disassembly of section .altinstr_replacement:
: Disassembly of section .exit.text:
[root@mica ~]#
And now 'perf annotate -v', verbose mode, will show the hits per precise IP, so that one can make sense of the attribution to each objdump line:
[root@mica ~]# perf annotate -v need_resched
Looking at the vmlinux_path (5 entries long)
Using /lib/modules/2.6.33-rc8-tip-00784-g3471df5-dirty/build/vmlinux for symbols
annotate_sym: filename=/lib/modules/2.6.33-rc8-tip-00784-g3471df5-dirty/build/vmlinux, sym=need_resched, start=0xffffffff810095ed, end=0xffffffff81009614
------------------------------------------------
Percent | Source code & Disassembly of vmlinux
------------------------------------------------
ffffffff810095f1: 152
ffffffff81009603: 28
ffffffff8100960f: 55
ffffffff81009610: 53
h->sum: 288
<SNIP same annotation>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Miller <davem@davemloft.net>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1267194194-15670-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Wed, 10 Feb 2010 15:10:48 +0000 (16:10 +0100)]
perf_events, x86: Remove superfluous MSR writes
We re-program the event control register every time we reset the count; this appears to be superfluous, hence remove it.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Thu, 11 Feb 2010 12:21:58 +0000 (13:21 +0100)]
perf_events: Simplify code by removing cpu argument to hw_perf_group_sched_in()
Since the cpu argument to hw_perf_group_sched_in() is always
smp_processor_id(), simplify the code a little by removing this argument
and using the current cpu where needed.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1265890918.5396.3.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Stephane Eranian [Mon, 8 Feb 2010 15:17:01 +0000 (17:17 +0200)]
perf_events, x86: AMD event scheduling
This patch adds correct AMD NorthBridge event scheduling.
NB events are events measuring L3 cache and HyperTransport traffic. They are identified by an event code >= 0xe0. They measure events on the NorthBridge, which is shared by all cores on a package. NB events are counted on a shared set of counters. When an NB event is programmed in a counter, the data actually comes from a shared counter. Thus, access to those counters needs to be synchronized.
We implement the synchronization such that no two cores can be measuring
NB events using the same counters. Thus, we maintain a per-NB allocation
table. The available slot is propagated using the event_constraint
structure.
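A rough sketch of the idea, with hypothetical structure and field names (the real implementation differs in detail): each NorthBridge gets a shared ownership table, and a core claims a slot for an NB event atomically.
struct amd_nb {
	int			nb_id;				/* NorthBridge id */
	struct perf_event	*owners[X86_PMC_IDX_MAX];	/* shared counter ownership */
};

/* try to claim shared NB counter 'idx' for 'event'; only one core
 * on the package may own a given NB counter at a time */
static bool amd_claim_nb_counter(struct amd_nb *nb, int idx,
				 struct perf_event *event)
{
	return cmpxchg(&nb->owners[idx], NULL, event) == NULL;
}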
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4b703957.0702d00a.6bf2.7b7d@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Stephane Eranian [Mon, 8 Feb 2010 15:06:01 +0000 (17:06 +0200)]
perf_events: Add new start/stop PMU callbacks
In certain situations, the kernel may need to stop and start the same event rapidly. The current PMU callbacks do not distinguish between stop and release (i.e., stop + free the resource). Thus, a counter may be released and then immediately re-acquired. Event scheduling will again take place, with no guarantee that the same counter will be assigned. On some processors, this may even lead to a failure to assign the event back, due to competition between cores.
This patch adds a new pair of callbacks to stop and restart a counter without actually releasing the underlying counter resource. On stop, the counter is stopped and its value saved, and that's it. On start, the value is reloaded and the counter is restarted (on x86, the actual restart is delayed until perf_enable()).
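A minimal sketch of the shape of such callbacks on x86, with hypothetical helper names and signatures (the real patch differs in detail):
static void x86_pmu_stop(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;

	x86_pmu.disable(event, hwc);			/* stop counting ... */
	x86_perf_event_update(event, hwc, hwc->idx);	/* ... and save the current count */
}

static int x86_pmu_start(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;

	x86_perf_event_set_period(event, hwc, hwc->idx);	/* reload the saved value */
	x86_pmu.enable(event, hwc);	/* on x86 the real restart waits for perf_enable() */
	return 0;
}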
Signed-off-by: Stephane Eranian <eranian@google.com>
[ added fallback to ->enable/->disable for all other PMUs
fixed x86_pmu_start() to call x86_pmu.enable()
merged __x86_pmu_disable into x86_pmu_stop() ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4b703875.0a04d00a.7896.ffffb824@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 26 Feb 2010 09:33:41 +0000 (10:33 +0100)]
perf_events: Report the MMAP pgoff value in bytes
DaveM reported that currently perf interprets the pgoff value reported by
the MMAP events as a byte range, but the kernel reports it as a page
offset.
Since it's broken (and unusable) anyway, change the kernel behaviour (ABI) to report bytes indeed, avoiding the need for userspace to deal with PAGE_SIZE things.
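A sketch of the kernel-side change in spirit (paraphrased, not the exact diff): report the mmap offset in bytes instead of pages when generating the MMAP event.
	/* before: page offset, forcing userspace to know PAGE_SIZE */
	mmap_event->event_id.pgoff = vma->vm_pgoff;

	/* after: plain byte offset */
	mmap_event->event_id.pgoff = (u64)vma->vm_pgoff << PAGE_SHIFT;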
Reported-by: David Miller <davem@davemloft.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Thu, 25 Feb 2010 15:57:40 +0000 (12:57 -0300)]
perf annotate: Defer allocating sym_priv->hist array
Defer it because symbol->end is not fixed up at symbol_filter time, only after all symbols for a DSO are loaded; before that, for asm symbols, it may be bogus, causing segfaults when hits happen in these symbols.
Reported-by: David Miller <davem@davemloft.net>
Reported-by: Anton Blanchard <anton@samba.org>
Acked-by: David Miller <davem@davemloft.net>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: <stable@kernel.org> # for .33.x. Does not apply cleanly, needs backport.
LKML-Reference: <
20100225155740.GB8553@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Mon, 22 Feb 2010 19:15:39 +0000 (16:15 -0300)]
perf symbols: Improve debugging information about symtab origins
Be clearer about DSO long names and tell from which file kernel symbols were obtained, all in --verbose mode:
[root@mica ~]# perf report -v > /dev/null
Looking at the vmlinux_path (5 entries long)
Using /lib/modules/2.6.33-rc8-tip-00777-g0918527-dirty/build/vmlinux for symbols
[root@mica ~]# mv /lib/modules/2.6.33-rc8-tip-00777-g0918527-dirty/build/vmlinux /tmp/dd
[root@mica ~]# perf report -v > /dev/null
Looking at the vmlinux_path (5 entries long)
Using /proc/kallsyms for symbols
[root@mica ~]#
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1266866139-6361-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Mon, 22 Feb 2010 19:14:22 +0000 (16:14 -0300)]
perf top: Use a macro instead of a constant variable
To overcome a silly gcc warning:
cc1: warnings being treated as errors
builtin-top.c: In function ‘lookup_sym_source’:
builtin-top.c:291: warning: not protecting local variables: variable length buffer
make: *** [builtin-top.o] Error 1
make: *** Waiting for unfinished jobs....
That is emitted for this:
const size_t pattern_len = BITS_PER_LONG / 4 + 2;
char pattern[pattern_len + 1];
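A sketch of the fix, with an assumed macro name: a preprocessor constant keeps the buffer a fixed-size array rather than a VLA, so the warning goes away.
#define PATTERN_LEN	(BITS_PER_LONG / 4 + 2)

	char pattern[PATTERN_LEN + 1];	/* fixed size, no "variable length buffer" warning */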
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1266866062-6287-1-git-send-email-acme@infradead.org>
[ -v2: macroify the naming style ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Zhang, Yanmin [Thu, 25 Feb 2010 03:00:51 +0000 (11:00 +0800)]
perf symbols: Check the right return variable
In the function dso__split_kallsyms(), curr_map saves the return value of map__new2(), so check it, instead of the variable 'map', after the call returns.
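A minimal sketch of the corrected check (paraphrased from the description, not the exact hunk):
	curr_map = map__new2(pos->start, dso, map->type);
	if (curr_map == NULL)	/* previously the code mistakenly tested 'map' here */
		return -1;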
Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: <stable@kernel.org> # for .33.x
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1267066851.1726.9.camel@localhost>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Frederic Weisbecker [Thu, 25 Feb 2010 02:03:52 +0000 (03:03 +0100)]
perf/scripts: Tag syscall_name helper as not yet available
The syscall_name() helper, which resolves a syscall arch number to its name, is not yet available, as we first need to implement event injection for it to work.
Remove it from the documentation or tag its references as not yet available. Once it's implemented, we can just revert the current patch.
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Tom Zanussi [Wed, 27 Jan 2010 08:28:03 +0000 (02:28 -0600)]
perf/scripts: Add perf-trace-python Documentation
Also a small update to the perf-trace-perl and perf-trace docs.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1264580883-15324-13-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tom Zanussi [Mon, 22 Feb 2010 07:12:59 +0000 (01:12 -0600)]
perf/scripts: Remove unnecessary PyTuple resizes
If we know the size of a tuple in advance, there's no need to resize
it - start out with the known size in the first place.
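A small illustration with the Python C API (not the exact hunks from the patch):
	PyObject *t;

	/* before: create, then grow to the size we already knew */
	t = PyTuple_New(2);
	_PyTuple_Resize(&t, 4);

	/* after: the number of fields is known up front */
	t = PyTuple_New(4);
	PyTuple_SetItem(t, 0, PyInt_FromLong(0));	/* first field */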
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1266822779.6426.4.camel@tropicana>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tom Zanussi [Wed, 27 Jan 2010 08:27:58 +0000 (02:27 -0600)]
perf/scripts: Add syscall tracing scripts
Adds a set of scripts that aggregate system call totals and system
call errors. Most are Python scripts that also test basic
functionality of the new Python engine, but there's also one Perl
script added for comparison and for reference in some new
Documentation contained in a later patch.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1264580883-15324-8-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tom Zanussi [Wed, 27 Jan 2010 08:27:57 +0000 (02:27 -0600)]
perf/scripts: Add Python scripting engine
Add base support for Python scripting to perf trace.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1264580883-15324-6-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tom Zanussi [Wed, 27 Jan 2010 08:27:56 +0000 (02:27 -0600)]
perf/scripts: Remove check-perf-trace from listed scripts
The check-perf-trace script only checks Perl functionality, and doesn't really need to be listed as a user script anyway.
This only removes the '-report' shell script, so although it doesn't appear in the listing, the '-record' shell script and the check-perf-trace Perl script itself are still available and can still be run manually as follows:
$ libexec/perf-core/scripts/perl/bin/check-perf-trace-record
$ perf trace -s libexec/perf-core/scripts/perl/check-perf-trace.pl
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1264580883-15324-6-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tom Zanussi [Wed, 27 Jan 2010 08:27:55 +0000 (02:27 -0600)]
perf/scripts: Move Perl scripting files to scripting-engines dir
Create a scripting-engines directory to contain scripting engine
implementation code, in anticipation of the addition of new scripting
support. Also removes trace-event-perl.h.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1264580883-15324-5-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tom Zanussi [Wed, 27 Jan 2010 08:27:54 +0000 (02:27 -0600)]
perf/scripts: Move common code out of Perl-specific files
This stuff is needed by all scripting engines; move it from the Perl
engine source to a more common place.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1264580883-15324-4-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tom Zanussi [Wed, 27 Jan 2010 08:27:53 +0000 (02:27 -0600)]
perf/scripts: Fix bug in Util.pm
Fix bogus calculation.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1264580883-15324-3-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Tom Zanussi [Wed, 27 Jan 2010 08:27:52 +0000 (02:27 -0600)]
perf/scripts: Fix supported language listing option
'perf trace -s list' prints a list of the supported scripting languages. One problem with it is that it falls through and prints the trace as well. The use of 'list' for this also makes it easy to confuse with 'perf trace -l', used for listing available scripts. So change 'perf trace -s list' to 'perf trace -s lang' and fix the fall-through problem.
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Keiichi KII <k-keiichi@bx.jp.nec.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <
1264580883-15324-2-git-send-email-tzanussi@gmail.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Arnaldo Carvalho de Melo [Sat, 20 Feb 2010 01:02:07 +0000 (23:02 -0200)]
perf tools: Don't use parent comm if not set at fork time
As the parent comm is then worthless, it confuses users about the thread where the sample really happened, leading them to think that the sample happened in the parent rather than where it really happened: in the children of a thread for which a PERF_RECORD_COMM event was not received.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1266627727-19715-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Sat, 20 Feb 2010 21:53:13 +0000 (19:53 -0200)]
perf symbols: Fix up map end too on modular kernels with no modules installed
In 2161db9 we stopped failing when not finding modules when asked to, but then the kernel map (just one, for vmlinux) wasn't having its ->end field correctly set up, so symbols were not being found for the vmlinux map because its range was 0-0.
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1266702793-29434-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
austin_zhang@linux.intel.com [Fri, 5 Feb 2010 17:02:42 +0000 (09:02 -0800)]
perf record: Fix existing process callgraph symbol
When running 'perf record -g' on an existing process, even with debuginfo packages installed, 'perf report' still cannot resolve symbols.
Try:
perf record -g -p `pidof xxx` -f
perf report
68.26% :1181 b74870f2 [.] 0x000000b74870f2
|
|--32.09%-- 0xb73b5b44
| 0xb7487102
| 0xb748a4e2
| 0xb748633d
| 0xb73b41cd
| 0xb73b4467
| 0xb747d531
The reason is that, for an existing process, in __cmd_record() the pid is 0 rather than the existing process id.
Signed-off-by: Austin Zhang <austin_zhang@linux.intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <4710.10.255.24.35.
1265389362.squirrel@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Masami Hiramatsu [Fri, 5 Feb 2010 17:16:47 +0000 (12:16 -0500)]
x86/alternatives: Fix build warning
Fixes these warnings:
arch/x86/kernel/alternative.c: In function 'alternatives_text_reserved':
arch/x86/kernel/alternative.c:402: warning: comparison of distinct pointer types lacks a cast
arch/x86/kernel/alternative.c:402: warning: comparison of distinct pointer types lacks a cast
arch/x86/kernel/alternative.c:405: warning: comparison of distinct pointer types lacks a cast
arch/x86/kernel/alternative.c:405: warning: comparison of distinct pointer types lacks a cast
Caused by: 2cfa197: ftrace/alternatives: Introducing *_text_reserved functions
Changes in v2:
- Use local variables to compare, instead of type casts.
Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
LKML-Reference: <
20100205171647.15750.37221.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Sun, 7 Feb 2010 13:46:16 +0000 (11:46 -0200)]
perf top: Use address pattern in lookup_sym_source
Because we may have aliases, like __GI___strcoll_l in
/lib64/libc-2.10.2.so that appears in objdump as:
$ objdump --start-address=0x0000003715a86420 \
--stop-address=0x0000003715a872dc -dS /lib64/libc-2.10.2.so
0000003715a86420 <__strcoll_l>:
3715a86420: 55 push %rbp
3715a86421: 48 89 e5 mov %rsp,%rbp
3715a86424: 41 57 push %r15
[root@doppio linux-2.6-tip]#
So look for the address exactly at the start of the line instead, so that annotation can work in these cases.
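A sketch of the matching change (paraphrased; the real lookup_sym_source() code differs): build the "address <" pattern and require it at the very start of the objdump line, instead of anywhere in it.
static int addr_line_matches(const char *line, unsigned long long start)
{
	char pattern[32];

	snprintf(pattern, sizeof(pattern), "%llx <", start);

	/* before: strstr() let the pattern match anywhere in the line;
	 * after: only accept it at the very start of the line */
	return strncmp(line, pattern, strlen(pattern)) == 0;
}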
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Kirill Smelkov <kirr@landau.phys.spbu.ru>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1265550376-12665-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Kirill Smelkov [Sun, 7 Feb 2010 13:46:15 +0000 (11:46 -0200)]
perf top: Fix annotate for userspace
First, for programs and prelinked libraries, the annotate code was fooled by objdump output IPs (src->eip in the code) being wrongly converted to absolute IPs. In such cases no conversion was needed, but in
src->eip = strtoull(src->line, NULL, 16);
src->eip = map->unmap_ip(map, src->eip); // = eip + map->start - map->pgoff
we were reading an absolute address from objdump (e.g. 8048604) and then almost doubling it, because eip and map->start are approximately equal for small programs.
Needless to say, later, in record_precise_ip(), there was no matching with real runtime IPs.
And second, like with `perf annotate` the problem with
non-prelinked *.so was that we were doing rip -> objdump address
conversion wrong.
Also, because unlike `perf annotate`, the `perf top` code does annotation based on absolute IPs for performance reasons(*), a new helper for mapping objdump addresses to IPs is introduced.
(*) we get samples info in absolute IPs, and since we do lots of
hit-testing on absolute IPs at runtime in record_precise_ip(), it's
better to convert objdump addresses to IPs once and do no conversion
at runtime.
I also had to fix how objdump output is parsed (it used a hardcoded 8/16 character format, which was inappropriate for ET_DYN dsos with small addresses like '4ac').
Also note that not all objdump output lines have associated IPs, e.g. look at the source lines here:
000004ac <my_strlen>:
extern "C"
int my_strlen(const char *s)
4ac: 55 push %ebp
4ad: 89 e5 mov %esp,%ebp
4af: 83 ec 10 sub $0x10,%esp
{
int len = 0;
4b2: c7 45 fc 00 00 00 00 movl $0x0,-0x4(%ebp)
4b9: eb 08 jmp 4c3 <my_strlen+0x17>
while (*s) {
++len;
4bb: 83 45 fc 01 addl $0x1,-0x4(%ebp)
++s;
4bf: 83 45 08 01 addl $0x1,0x8(%ebp)
So we mark them with eip=0, and ignore such lines in annotate
lookup code.
Signed-off-by: Kirill Smelkov <kirr@landau.phys.spbu.ru>
[ Note: one hunk of this patch was applied by Mike in 57d8188 ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <
1265550376-12665-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Masami Hiramatsu [Fri, 5 Feb 2010 06:24:34 +0000 (01:24 -0500)]
kprobes: Add mcount to the kprobes blacklist
Since the mcount function can be called from everywhere, it should be blacklisted. Moreover, "mcount" is a special symbol name. So it is better to put it in the generic blacklist.
Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <
20100205062433.3745.36726.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar [Thu, 4 Feb 2010 09:22:01 +0000 (10:22 +0100)]
perf tools: Fix session init on non-modular kernels
perf top and perf record refuse to initialize on non-modular kernels:
$ perf top -v
map_groups__set_modules_path_dir: cannot open /lib/modules/2.6.33-rc6-tip-00586-g398dde3-dirty/
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1265223128-11786-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Xiao Guangrong [Thu, 4 Feb 2010 08:46:42 +0000 (16:46 +0800)]
perf tools: Clean up O_LARGEFILE et al usage
Setting _FILE_OFFSET_BITS and using O_LARGEFILE, lseek64, etc., is redundant. Thanks to H. Peter Anvin for pointing it out.
So this patch removes O_LARGEFILE, lseek64, etc.
Suggested-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <4B6A8972.3070605@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Stephane Eranian [Mon, 1 Feb 2010 12:50:01 +0000 (14:50 +0200)]
perf_events, x86: Fix bug in hw_perf_enable()
We cannot assume that because hwc->idx == assign[i], we can avoid
reprogramming the counter in hw_perf_enable().
The event may have been scheduled out and another event may have been
programmed into this counter. Thus, we need a more robust way of
verifying if the counter still contains config/data related to an event.
This patch adds a generation number to each counter on each cpu. Using this mechanism we can verify reliably whether the content of a counter corresponds to an event.
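A rough sketch of the mechanism, with field names that follow the description rather than the exact patch: each counter slot carries a generation tag, and an event only skips reprogramming if both its slot and its remembered tag still match.
struct cpu_hw_events {
	u64	tags[X86_PMC_IDX_MAX];	/* one generation tag per counter slot */
	/* ... */
};

static void assign_counter(struct cpu_hw_events *cpuc,
			   struct perf_event *event, int idx)
{
	cpuc->tags[idx]++;			/* new occupant, new generation */
	event->hw.idx	   = idx;
	event->hw.last_tag = cpuc->tags[idx];
}

static bool match_prev_assignment(struct cpu_hw_events *cpuc,
				  struct perf_event *event, int idx)
{
	return event->hw.idx == idx &&
	       event->hw.last_tag == cpuc->tags[idx];
}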
Signed-off-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <4b66dc67.0b38560a.1635.ffffae18@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 29 Jan 2010 12:25:12 +0000 (13:25 +0100)]
bitops: Ensure the compile time HWEIGHT is only used for such
Avoid accidental misuse by making non-constant uses fail to compile.
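A sketch of one way to do that (close in spirit to the patch, exact names assumed): make the HWEIGHT macros blow up at compile time when fed a non-constant expression.
/* reject non-constant arguments: BUILD_BUG_ON_ZERO() fails the build
 * unless the argument is a compile-time constant */
#define HWEIGHT8(w)					\
	(BUILD_BUG_ON_ZERO(!__builtin_constant_p(w)) +	\
	 __const_hweight8(w))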
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 29 Jan 2010 12:25:31 +0000 (13:25 +0100)]
perf_events, x86: Implement intel core solo/duo support
Implement Intel Core Solo/Duo, aka.
Intel Architectural Performance Monitoring Version 1.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Thu, 28 Jan 2010 12:57:44 +0000 (13:57 +0100)]
perf_events: Optimize perf_event_task_tick()
Pretty much all of the calls do perf_disable/perf_enable cycles, pull
that out to cut back on hardware programming.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Masami Hiramatsu [Tue, 2 Feb 2010 21:49:25 +0000 (16:49 -0500)]
ftrace: Remove record freezing
Remove record freezing. Because kprobes never puts a probe on ftrace's mcount call anymore, ftrace doesn't need to check whether a kprobe is on it.
Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
20100202214925.4694.73469.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Masami Hiramatsu [Tue, 2 Feb 2010 21:49:18 +0000 (16:49 -0500)]
kprobes: Check probe address is reserved
Check whether the address of a new probe is already reserved by ftrace or alternatives (on x86) when registering it. If it is reserved, return an error and do not register the probe.
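A sketch of the registration-time check (paraphrased; the exact placement and error handling inside register_kprobe() may differ):
	/* refuse probes on text owned by ftrace or by SMP alternatives */
	if (ftrace_text_reserved(p->addr, p->addr) ||
	    alternatives_text_reserved(p->addr, p->addr))
		return -EINVAL;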
Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Jason Baron <jbaron@redhat.com>
LKML-Reference: <
20100202214918.4694.94179.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Masami Hiramatsu [Tue, 2 Feb 2010 21:49:11 +0000 (16:49 -0500)]
ftrace/alternatives: Introducing *_text_reserved functions
Introduce *_text_reserved functions for checking whether a text address range is partially reserved or not. This patch provides checking routines for x86 SMP alternatives and dynamic ftrace. Since both subsystems modify fixed pieces of kernel text, they should reserve and protect those from other dynamic text modifiers, like kprobes.
This will also be extended when other subsystems which modify fixed pieces of kernel text are introduced. Dynamic text modifiers should avoid those pieces.
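A sketch of what such a check can look like for ftrace (illustrative; the real routine walks ftrace's internal record tables):
/* non-zero if [start, end] overlaps any mcount call site ftrace manages */
int ftrace_text_reserved(void *start, void *end)
{
	struct dyn_ftrace *rec;
	struct ftrace_page *pg;

	do_for_each_ftrace_rec(pg, rec) {
		if (rec->ip <= (unsigned long)end &&
		    rec->ip + MCOUNT_INSN_SIZE > (unsigned long)start)
			return 1;
	} while_for_each_ftrace_rec();

	return 0;
}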
Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Jason Baron <jbaron@redhat.com>
LKML-Reference: <
20100202214911.4694.16587.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Masami Hiramatsu [Tue, 2 Feb 2010 21:49:04 +0000 (16:49 -0500)]
kprobes: Disable booster when CONFIG_PREEMPT=y
Disable the kprobe booster when CONFIG_PREEMPT=y for now, because it can't be ensured that all kernel threads preempted on a kprobe's boosted slot have run out of the slot, even when using freeze_processes().
The booster on preemptive kernels will be resumed if synchronize_tasks() or something like that is introduced.
Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: systemtap <systemtap@sources.redhat.com>
Cc: DLE <dle-develop@lists.sourceforge.net>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jim Keniston <jkenisto@us.ibm.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <
20100202214904.4694.24330.stgit@dhcp-100-2-132.bos.redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Mike Galbraith [Thu, 4 Feb 2010 06:31:46 +0000 (07:31 +0100)]
perf annotate: Fix perf top module symbol annotation
Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Kirill Smelkov <kirr@landau.phys.spbu.ru>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1265265106.6364.5.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Kirill Smelkov [Wed, 3 Feb 2010 18:52:08 +0000 (16:52 -0200)]
perf top: Teach it to autolocate vmlinux
By relying on logic in dso__load_kernel_sym(), we can
automatically load vmlinux.
The only thing which needs to be adjusted is how the --sym-annotate option is handled: now we can't rely on vmlinux being loaded until a full successful pass of dso__load_vmlinux(), and that's not the case if we do the sym_filter_entry setup in symbol_filter().
So move this step right after event__process_sample(), where we know the whole dso__load_kernel_sym() pass is done.
By the way, though conceptually similar, `perf top` still can't annotate userspace - see the next patches with fixes.
Signed-off-by: Kirill Smelkov <kirr@landau.phys.spbu.ru>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <
1265223128-11786-9-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Kirill Smelkov [Wed, 3 Feb 2010 18:52:07 +0000 (16:52 -0200)]
perf annotate: Fix it for non-prelinked *.so
The problem was that we were incorrectly calculating objdump addresses for sym->start and sym->end; look:
For a simple ET_DYN type DSO (*.so) with one function, objdump -dS output is something like this:
000004ac <my_strlen>:
int my_strlen(const char *s)
4ac: 55 push %ebp
4ad: 89 e5 mov %esp,%ebp
4af: 83 ec 10 sub $0x10,%esp
{
i.e. we have relative-to-dso-mapping IPs (=RIP) there.
For the ET_EXEC type, and probably for prelinked libs as well (sorry, can't test - I don't use prelink), objdump outputs absolute IPs, e.g.
08048604 <zz_strlen>:
extern "C"
int zz_strlen(const char *s)
8048604: 55 push %ebp
8048605: 89 e5 mov %esp,%ebp
8048607: 83 ec 10 sub $0x10,%esp
{
So, since sym->start is always relative to the dso mapping(*), we'll have to unmap it for ET_EXEC-like cases, and leave it as is for ET_DYN cases.
(*) and it is - we've explicitly made it relative. Look for the adjust_symbols handling in dso__load_sym().
Previously we were always unmapping sym->start, so for ET_DYN dsos the resulting addresses were wrong, and the objdump output was empty.
The end result was that perf annotate output for symbols from non-prelinked *.so always had 0.00% percentages only, which is wrong.
To fix it, let's introduce a helper for converting a rip to an objdump address, and also let's document what map_ip() and unmap_ip() do -- I had to study the sources for several hours to understand them.
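A sketch of such a helper (name and condition as described above; the exact code may differ):
static u64 map__rip_2objdump(struct map *map, u64 rip)
{
	/* adjust_symbols is set for ET_EXEC and prelinked DSOs, whose
	 * objdump output uses absolute addresses; ET_DYN objdump output
	 * is already relative to the mapping, so leave it alone */
	return map->dso->adjust_symbols ? map->unmap_ip(map, rip) : rip;
}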
Signed-off-by: Kirill Smelkov <kirr@landau.phys.spbu.ru>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <
1265223128-11786-8-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 3 Feb 2010 18:52:06 +0000 (16:52 -0200)]
perf tools: Adjust some verbosity levels
So as not to pollute 'perf annotate' debugging sessions too much.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1265223128-11786-7-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 3 Feb 2010 18:52:05 +0000 (16:52 -0200)]
perf record: Stop intercepting events, use postprocessing to get build-ids
We want to stream events as fast as possible to perf.data, and
also in the future we want to have splice working, when no
interception will be possible.
Using build_id__mark_dso_hit_ops to create the list of DSOs that
back MMAPs we also optimize disk usage in the build-id cache by
only caching DSOs that had hits.
Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1265223128-11786-6-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 3 Feb 2010 18:52:04 +0000 (16:52 -0200)]
perf build-id: Move the routine to find DSOs with hits to the lib
Because 'perf record' will have to find the build-ids after we stop recording, so as to reduce even more the impact on the workload while we do the measurement.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1265223128-11786-5-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 3 Feb 2010 18:52:03 +0000 (16:52 -0200)]
perf probe: Don't use a perf_session instance just to resolve symbols
With the recent modifications done to untie the session and symbol layers, 'perf probe' can now use just the symbols layer.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 3 Feb 2010 18:52:02 +0000 (16:52 -0200)]
perf symbols: Ditch vdso global variable
We can check using strcmp; most DSOs don't start with '[', so the test is cheap enough, and we had to test it there anyway, since when reading perf.data files we weren't calling the routine that created this global variable and thus weren't setting it as "loaded", which was causing a bogus:
Failed to open [vdso], continuing without symbols
message as the first line of 'perf report'.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1265223128-11786-3-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 3 Feb 2010 18:52:01 +0000 (16:52 -0200)]
perf symbols: Fixup vsyscall maps
While debugging a problem reported by Pekka Enberg, by printing the IP and all the maps for a thread when we don't find a map for an IP, I noticed that dso__load_sym needs to fix up these extra maps it creates to hold symbols in ELF sections other than the main kernel one.
Now we're back showing things like:
[root@doppio linux-2.6-tip]# perf report | grep vsyscall
0.02% mutt [kernel.kallsyms].vsyscall_fn [.] vread_hpet
0.01% named [kernel.kallsyms].vsyscall_fn [.] vread_hpet
0.01% NetworkManager [kernel.kallsyms].vsyscall_fn [.] vread_hpet
0.01% gconfd-2 [kernel.kallsyms].vsyscall_0 [.] vgettimeofday
0.01% hald-addon-rfki [kernel.kallsyms].vsyscall_fn [.] vread_hpet
0.00% dbus-daemon [kernel.kallsyms].vsyscall_fn [.] vread_hpet
[root@doppio linux-2.6-tip]#
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1265223128-11786-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 3 Feb 2010 18:52:00 +0000 (16:52 -0200)]
perf symbols: Remove perf_session usage in symbols layer
I noticed while writing the first test in 'perf regtest' that just to test the symbol handling routines one needs to create a perf session, that is, a layer centered on a perf.data file, events, etc., so I untied these layers.
This reduces the complexity for the users, as the number of parameters to most of the symbols and session APIs is now reduced, while not adding more state to all the map instances: they only carry the data needed to split the kernel (kallsyms and ELF symtab sections) maps and do vmlinux relocation on the main kernel map.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1265223128-11786-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Xiao Guangrong [Wed, 3 Feb 2010 03:53:14 +0000 (11:53 +0800)]
perf tools: Use O_LARGEFILE to open perf data file
Open the perf data file with the O_LARGEFILE flag, since its size can easily be larger than 2G.
For example:
# rm -rf perf.data
# ./perf kmem record sleep 300
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 3142.147 MB perf.data (~137282513 samples) ]
# ll -h perf.data
-rw------- 1 root root 3.1G .....
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <4B68F32A.9040203@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar [Sun, 31 Jan 2010 07:27:58 +0000 (08:27 +0100)]
perf lock: Clean up various details
Fix up a few small stylistic details:
- use consistent vertical spacing/alignment
- remove line80 artifacts
- group some global variables better
- remove dead code
Plus rename 'prof' to 'report' to make it more in line with other
tools, and remove the line/file keying as we really want to use
IPs like the other tools do.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1264851813-8413-12-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Hitoshi Mitake [Sat, 30 Jan 2010 11:43:33 +0000 (20:43 +0900)]
perf lock: Introduce new tool "perf lock", for analyzing lock statistics
Add a new subcommand, "perf lock", to perf.
I have a lot of remaining ToDos, but for now perf lock can
already provide minimal functionality for analyzing lock
statistics.
Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1264851813-8413-12-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Hitoshi Mitake [Sat, 30 Jan 2010 11:43:32 +0000 (20:43 +0900)]
perf lock: Enhance information of lock trace events
Add wait time and lock identification details.
Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1264851813-8413-11-git-send-email-mitake@dcl.info.waseda.ac.jp>
[ removed the file/line bits as we can do that better via IPs ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Hitoshi Mitake [Sat, 30 Jan 2010 11:43:24 +0000 (20:43 +0900)]
perf: Add util/include/linuxhash.h to include hash.h of kernel
linux/hash.h, the kernel's hash header, is also useful for perf. util/include/linuxhash.h includes linux/hash.h, so we can now use hash facilities (e.g. hash_long()) in perf.
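A small illustrative use, hashing lock instance addresses into a fixed-size bucket table the way a tool like perf lock might (names and include paths here are illustrative):
#include "util/include/linuxhash.h"	/* pulls in linux/hash.h */
#include "util/include/linux/list.h"

#define LOCKHASH_BITS	12
#define LOCKHASH_SIZE	(1UL << LOCKHASH_BITS)

static struct list_head lockhash_table[LOCKHASH_SIZE];

/* map a lock instance's address to its hash bucket */
static struct list_head *lockhashentry(void *addr)
{
	return &lockhash_table[hash_long((unsigned long)addr, LOCKHASH_BITS)];
}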
Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1264851813-8413-3-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Hitoshi Mitake [Sat, 30 Jan 2010 11:43:23 +0000 (20:43 +0900)]
perf tools: Add __data_loc support
This patch is required to test the next patch for perf lock.
At 064739bc4b3d7f424b2f25547e6611bcf0132415, support for the "__data_loc" format modifier was added.
But when I wanted to parse the format of lock_acquired (or some other event), raw_field_ptr() did not return the correct pointer. So I modified raw_field_ptr() as in this patch. Now raw_field_ptr() works well.
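A sketch of what the fix amounts to (paraphrased; the field-lookup helper name is assumed): for __data_loc fields, the value stored at the field's offset is itself an offset/length pair, and the payload lives at the offset in its low 16 bits.
void *raw_field_ptr(struct event *event, const char *name, void *data)
{
	struct format_field *field = find_any_field(event, name);

	if (!field)
		return NULL;

	if (field->flags & FIELD_IS_DYNAMIC) {
		/* __data_loc: low 16 bits = offset of the payload in the record */
		int offset = *(int *)(data + field->offset);
		return data + (offset & 0xffff);
	}

	return data + field->offset;
}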
Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Tom Zanussi <tzanussi@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
LKML-Reference: <
1264851813-8413-2-git-send-email-mitake@dcl.info.waseda.ac.jp>
[ v3: fixed minor stylistic detail ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Hitoshi Mitake [Sat, 30 Jan 2010 11:55:41 +0000 (20:55 +0900)]
Revert "perf record: Intercept all events"
This reverts commit f5a2c3dce03621b55f84496f58adc2d1a87ca16f.
This revert is required to make "perf lock rec" work.
The commit f5a2c3dce0 changed write_event() in builtin-record.c, and the changed write_event() sometimes doesn't stop with perf lock rec.
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
[ that commit also causes perf record to not be Ctrl-C-able,
and it's conceptually wrong to parse the data at record time
(unconditionally - even when not needed), as we eventually
want to be able to do zero-copy recording, at least for
non-archive recordings. ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
John Kacur [Wed, 27 Jan 2010 23:05:54 +0000 (21:05 -0200)]
perf: Ignore perf-archive temp file
Tell git to ignore perf-archive.
Signed-off-by: John Kacur <jkacur@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1264633557-17597-6-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Thiago Farina [Wed, 27 Jan 2010 23:05:55 +0000 (21:05 -0200)]
tools/perf/perf.c: Clean up trivial style issues
Checked with:
./../scripts/checkpatch.pl --terse --file perf.c
perf.c: 51: ERROR: open brace '{' following function declarations go on the next line
perf.c: 73: ERROR: "foo*** bar" should be "foo ***bar"
perf.c:112: ERROR: space prohibited before that close parenthesis ')'
perf.c:127: ERROR: space prohibited before that close parenthesis ')'
perf.c:171: ERROR: "foo** bar" should be "foo **bar"
perf.c:213: ERROR: "(foo*)" should be "(foo *)"
perf.c:216: ERROR: "(foo*)" should be "(foo *)"
perf.c:217: ERROR: space required before that '*' (ctx:OxV)
perf.c:452: ERROR: do not initialise statics to 0 or NULL
perf.c:453: ERROR: do not initialise statics to 0 or NULL
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
LKML-Reference: <
1264633557-17597-7-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar [Fri, 29 Jan 2010 08:24:57 +0000 (09:24 +0100)]
Merge branch 'perf/urgent' into perf/core
Merge reason: We want to queue up a dependent patch. Also update to
later -rc's.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 27 Jan 2010 23:05:52 +0000 (21:05 -0200)]
perf session: Create kernel maps in the constructor
Remove one extra step needed in the tools that need this, and fix a bug in 'perf probe' where this was not being done.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1264633557-17597-4-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 27 Jan 2010 23:05:51 +0000 (21:05 -0200)]
perf symbols: Split helpers used when creating kernel dso object
To make it clear and allow for direct usage by, for instance,
regression test suites.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1264633557-17597-3-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 27 Jan 2010 23:05:50 +0000 (21:05 -0200)]
perf symbols: Factor out dso__load_vmlinux_path()
So that we can call it directly from regression tests, and also
to reduce the size of dso__load_kernel_sym(), making it more
clear.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1264633557-17597-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Wed, 27 Jan 2010 23:05:49 +0000 (21:05 -0200)]
perf top: Exit if specified --vmlinux can't be used
As we do lazy loading of symtabs, we will only know whether the specified vmlinux file is invalid when we actually have a hit in kernel space and then try to load it. So if we get kernel hits and there are _no_ symbols in the DSO backing the kernel map, bail out.
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <
1264633557-17597-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 29 Jan 2010 08:04:26 +0000 (09:04 +0100)]
perf_events: Fix sample_period transfer on inherit
One problem with frequency-driven counters is that we cannot predict the rate at which they trigger; therefore we have to start them at period=1, which causes a ramp-up effect. However, if we fail to propagate the stable state on fork, each new child will have to ramp up again. This can lead to significant artifacts in sample data.
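A sketch of the propagation when the child event is created from its parent (field names follow struct hw_perf_event; the exact placement in the inherit path may differ):
	if (parent_event->attr.freq) {
		u64 sample_period = parent_event->hw.sample_period;
		struct hw_perf_event *hwc = &child_event->hw;

		/* start the child at the parent's ramped-up period,
		 * not back at period = 1 */
		hwc->sample_period = sample_period;
		hwc->last_period   = sample_period;
		atomic64_set(&hwc->period_left, sample_period);
	}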
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: eranian@google.com
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <
1264752266.4283.2121.camel@laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Wed, 27 Jan 2010 22:07:49 +0000 (23:07 +0100)]
perf_events, x86: Remove spurious counter reset from x86_pmu_enable()
At enable time the counter might still have a ->idx pointing to
a previously occupied location that might now be taken by
another event. Resetting the counter at that location with data
from this event will destroy the other counter's count.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.261477183@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Wed, 27 Jan 2010 22:07:48 +0000 (23:07 +0100)]
perf_events, x86: Implement Intel Westmere support
The new Intel documentation includes Westmere arch specific
event maps that are significantly different from the Nehalem
ones. Add support for this generation.
Found the CPUID model numbers on Wikipedia.
Also amend some Nehalem constraints; spotted those when looking for the differences between Nehalem and Westmere.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.151865645@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Wed, 27 Jan 2010 22:07:47 +0000 (23:07 +0100)]
perf_events, x86: Clean up hw_perf_*_all() implementation
Put the recursion avoidance code in the generic hook instead of
replicating it in each implementation.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221122.057507285@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Wed, 27 Jan 2010 22:07:46 +0000 (23:07 +0100)]
perf_events, x86: Fix event constraint masks
Since constraints are specified on the event number, not on the
event number and unit mask, shorten the constraint masks so that
we'll actually match something.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100127221121.967610372@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Mon, 25 Jan 2010 14:58:43 +0000 (15:58 +0100)]
perf_event: x86: Deduplicate the disable code
Share the meat of the x86_pmu_disable() code with hw_perf_enable().
Also remove the barrier() from that code, since I could not convince
myself we actually need it.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Ingo Molnar [Wed, 27 Jan 2010 07:39:39 +0000 (08:39 +0100)]
perf, x86: Clean up event constraints code a bit
- Remove stray debug code
- Improve ugly macros a bit
- Remove some whitespace damage
- (Also fix up some accumulated damage in perf_event.h)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Stephane Eranian <eranian@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Peter Zijlstra [Mon, 25 Jan 2010 10:57:25 +0000 (11:57 +0100)]
perf_event: x86: Optimize x86_pmu_disable()
x86_pmu_disable() removes the event from the cpuc->event_list[];
however, since an event can only be on that list once, stop looking
after we have found it.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
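A self-contained illustration of the shape of this optimization (not
the kernel code itself): an item appears at most once in the array, so
the scan stops as soon as it has been removed.

    static void remove_once(void **list, int *n, void *item)
    {
            for (int i = 0; i < *n; i++) {
                    if (list[i] != item)
                            continue;

                    /* shift the tail down over the removed slot */
                    while (++i < *n)
                            list[i - 1] = list[i];
                    (*n)--;

                    break;  /* the item is on the list at most once: stop looking */
            }
    }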
Peter Zijlstra [Fri, 22 Jan 2010 15:40:12 +0000 (16:40 +0100)]
perf_event: x86: Optimize the fast path a little more
Remove num from the fast path and save a few ops.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155536.056430539@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 22 Jan 2010 15:32:17 +0000 (16:32 +0100)]
perf_event: x86: Optimize constraint weight computation
Add a weight member to the constraint structure and avoid recomputing the
weight at runtime.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.963944926@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
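A minimal sketch of the idea, assuming a constraint layout along the
lines described in this series (names are illustrative): the weight is
computed once when the constraint is defined rather than recomputed
with hweight() on every scheduling pass.

    #include <stdint.h>

    struct event_constraint {
            uint64_t idxmsk64;  /* bitmask of counters this event may use     */
            int      code;      /* event code the constraint applies to       */
            int      cmask;     /* which bits of the event code to compare    */
            int      weight;    /* popcount of idxmsk64, cached at definition */
    };

    /* 0x3 = counters 0 and 1 usable, so the weight is simply 2 */
    static const struct event_constraint example_constraint = {
            .idxmsk64 = 0x3, .code = 0xc0, .cmask = 0xff, .weight = 2,
    };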
Peter Zijlstra [Fri, 22 Jan 2010 15:32:17 +0000 (16:32 +0100)]
perf_event: x86: Optimize the constraint searching bits
Instead of copying bitmasks around, pass pointers to the constraint
structure.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.887853503@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 22 Jan 2010 14:59:29 +0000 (15:59 +0100)]
bitops: Provide compile time HWEIGHT{8,16,32,64}
Provide compile time versions of hweight.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <20100122155535.797688466@chello.nl>
[ Remove some whitespace damage while we are at it ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
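A self-contained sketch of a compile-time population count in the
spirit of this patch; the macro names here are illustrative rather than
the exact kernel ones.

    #include <stdio.h>

    #define CT_HWEIGHT8(w)                                          \
            ( (!!((w) & (1ULL << 0))) + (!!((w) & (1ULL << 1))) +   \
              (!!((w) & (1ULL << 2))) + (!!((w) & (1ULL << 3))) +   \
              (!!((w) & (1ULL << 4))) + (!!((w) & (1ULL << 5))) +   \
              (!!((w) & (1ULL << 6))) + (!!((w) & (1ULL << 7))) )

    #define CT_HWEIGHT16(w) (CT_HWEIGHT8(w)  + CT_HWEIGHT8((w) >> 8))
    #define CT_HWEIGHT32(w) (CT_HWEIGHT16(w) + CT_HWEIGHT16((w) >> 16))
    #define CT_HWEIGHT64(w) (CT_HWEIGHT32(w) + CT_HWEIGHT32((w) >> 32))

    /* usable in constant expressions, e.g. static initializers */
    static const int example_weight = CT_HWEIGHT16(0xf0f0); /* == 8 */

    int main(void)
    {
            /* prints "8 24" */
            printf("%d %d\n", example_weight, CT_HWEIGHT64(0xdeadbeefULL));
            return 0;
    }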
Peter Zijlstra [Fri, 22 Jan 2010 14:38:26 +0000 (15:38 +0100)]
perf_event: x86: Reduce some overly long lines with some MACROs
Introduce INTEL_EVENT_CONSTRAINT and FIXED_EVENT_CONSTRAINT to reduce
some line length and typing work.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.688730371@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 22 Jan 2010 14:25:59 +0000 (15:25 +0100)]
perf_event: x86: Clean up some of the u64/long bitmask casting
We need this to be u64 for direct assignment, but the bitmask functions
all work on unsigned long, leading to cast heaven; solve this by using a
union.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.595961269@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
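A minimal sketch of the union trick, assuming a constraint layout
similar to the one used in this series; field names and sizes are
illustrative.

    #include <stdint.h>

    #define MAX_COUNTERS  64
    #define BITS_PER_LONG (8 * sizeof(unsigned long))
    #define IDXMSK_LONGS  ((MAX_COUNTERS + BITS_PER_LONG - 1) / BITS_PER_LONG)

    struct event_constraint {
            union {
                    uint64_t      idxmsk64[1];          /* direct u64 assignment  */
                    unsigned long idxmsk[IDXMSK_LONGS]; /* for the bitmap helpers */
            };
            int code;
            int cmask;
    };

    /* assign as u64 ... */
    static struct event_constraint c = { .idxmsk64 = { 0x3ULL } };
    /* ... and hand c.idxmsk to bitmap-style helpers without any casting */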
Peter Zijlstra [Fri, 22 Jan 2010 13:55:22 +0000 (14:55 +0100)]
perf_event: x86: Fixup constraints typing issue
Constraints get defined as u64 but are used in long quantities and then
cast to long.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.504916780@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Fri, 22 Jan 2010 13:35:46 +0000 (14:35 +0100)]
perf_event: x86: Allocate the fake_cpuc
GCC was complaining that the stack usage was too large, so allocate the
structure dynamically.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
LKML-Reference: <20100122155535.411197266@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
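A sketch of the shape of the change (kernel context, illustrative only;
the surrounding function is an assumption): the scratch structure moves
from the stack to a kzalloc()'d buffer.

    #include <linux/slab.h>

    static int validate_group_sketch(void)
    {
            struct cpu_hw_events *fake_cpuc;
            int ret = 0;

            fake_cpuc = kzalloc(sizeof(*fake_cpuc), GFP_KERNEL);
            if (!fake_cpuc)
                    return -ENOMEM;

            /* ... run the event-scheduling validation against fake_cpuc ... */

            kfree(fake_cpuc);
            return ret;
    }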
Stephane Eranian [Thu, 21 Jan 2010 15:39:01 +0000 (17:39 +0200)]
perf_events: Add fast-path to the rescheduling code
Implement correct fastpath scheduling, i.e., reuse previous assignment.
Signed-off-by: Stephane Eranian <eranian@google.com>
[ split from a larger patch ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4b588464.1818d00a.4456.383b@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Stephane Eranian [Mon, 18 Jan 2010 08:58:01 +0000 (10:58 +0200)]
perf_events, x86: Improve x86 event scheduling
This patch improves event scheduling by maximizing the use of PMU
registers regardless of the order in which events are created in a group.
The algorithm takes into account the list of counter constraints for each
event. It assigns events to counters from the most constrained, i.e.,
works on only one counter, to the least constrained, i.e., works on any
counter.
Intel Fixed counter events and the BTS special event are also handled via
this algorithm which is designed to be fairly generic.
The patch also updates the validation of an event to use the scheduling
algorithm. This will cause early failure in perf_event_open().
The 2nd version of this patch follows the model used by PPC, by running
the scheduling algorithm and the actual assignment separately. Actual
assignment takes place in hw_perf_enable() whereas scheduling is
implemented in hw_perf_group_sched_in() and x86_pmu_enable().
Signed-off-by: Stephane Eranian <eranian@google.com>
[ fixup whitespace and style nits as well as adding is_x86_event() ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <4b5430c6.0f975e0a.1bf9.ffff85fe@mx.google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
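A self-contained toy version of the most-constrained-first idea (an
illustration of the approach, not the kernel scheduler): each event
carries a bitmask of counters it may use, and events with the fewest
options are placed first.

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_COUNTERS 8
    #define MAX_EVENTS   64  /* assumption for this sketch: n <= 64 */

    /* allowed[i]: bitmask of counters event i may use; assign[i]: chosen
     * counter. Returns true if every event in the group got a counter. */
    static bool schedule_events(const uint32_t *allowed, int *assign, int n)
    {
            bool counter_used[MAX_COUNTERS] = { false };
            bool placed[MAX_EVENTS] = { false };

            for (int round = 0; round < n; round++) {
                    int best = -1, best_weight = MAX_COUNTERS + 1;

                    /* pick the unplaced event with the fewest usable counters */
                    for (int i = 0; i < n; i++) {
                            if (placed[i])
                                    continue;
                            int w = __builtin_popcount(allowed[i]);
                            if (w < best_weight) {
                                    best_weight = w;
                                    best = i;
                            }
                    }

                    /* give it the first free counter it is allowed to use */
                    int c;
                    for (c = 0; c < MAX_COUNTERS; c++) {
                            if (!counter_used[c] && (allowed[best] & (1u << c)))
                                    break;
                    }
                    if (c == MAX_COUNTERS)
                            return false;  /* this group does not fit on the PMU */

                    counter_used[c] = true;
                    assign[best] = c;
                    placed[best] = true;
            }
            return true;
    }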
K.Prasad [Thu, 28 Jan 2010 11:14:15 +0000 (16:44 +0530)]
x86/hw-breakpoints: Optimize return code from notifier chain in hw_breakpoint_handler
Processing of debug exceptions in do_debug() can stop, by returning
NOTIFY_STOP, if the exception originated from a hw-breakpoint, which
covers most cases.
But for certain cases, such as:
a) user-space breakpoints with pending SIGTRAP signal delivery (as
in the case of ptrace-induced breakpoints), and
b) exceptions due to causes other than breakpoints,
we will continue to process the exception by returning NOTIFY_DONE.
Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Roland McGrath <roland@redhat.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
LKML-Reference: <20100128111415.GC13935@in.ibm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
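A sketch of the resulting decision; only NOTIFY_STOP/NOTIFY_DONE are
the real notifier return codes, while the predicate and the DR6 mask
below are illustrative placeholders for the cases listed above.

    int rc = NOTIFY_STOP;  /* usual case: the hw-breakpoint was fully handled */

    if (needs_user_sigtrap(bp) || (dr6 & ~HWBP_TRAP_BITS_EXAMPLE))
            rc = NOTIFY_DONE;  /* let do_debug() continue processing */

    return rc;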
K.Prasad [Thu, 28 Jan 2010 11:14:01 +0000 (16:44 +0530)]
x86/debug: Clear reserved bits of DR6 in do_debug()
Clear the reserved bits from the stored copy of the debug status
register (DR6).
This will allow easy bitwise operations, such as quick testing
of a debug event's origin.
Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Cc: Roland McGrath <roland@redhat.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Ingo Molnar <mingo@elte.hu>
LKML-Reference: <20100128111401.GB13935@in.ibm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
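A sketch of the idea in kernel context (illustrative, not the patch):
mask off the architecturally reserved DR6 bits right after reading the
register, so later origin checks are plain bitwise tests. The mask
value here is an assumption.

    #define DR6_RESERVED_EXAMPLE 0xFFFF0FF0UL  /* reserved DR6 bits, assumed value */

    unsigned long dr6;

    get_debugreg(dr6, 6);          /* read the debug status register          */
    dr6 &= ~DR6_RESERVED_EXAMPLE;  /* keep only meaningful bits for later tests */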
Xiao Guangrong [Thu, 28 Jan 2010 01:34:27 +0000 (09:34 +0800)]
tracing/kprobe: Cleanup unused return value of tracing functions
The return values of the kprobe tracing functions are meaningless;
let's remove them.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <4B60E9A3.2040505@cn.fujitsu.com>
[ fweisbec@gmail.com: whitespace fixes, drop useless void returns at
the end of functions ]
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Xiao Guangrong [Thu, 28 Jan 2010 01:32:29 +0000 (09:32 +0800)]
perf: Factorize trace events raw sample buffer operations
Introduce ftrace_perf_buf_prepare() and ftrace_perf_buf_submit() to
gather the common code that operates on raw events sampling buffer.
This cleans up redundant code between regular trace events, syscall
events and kprobe events.
Changelog v1->v2:
- Rename function name as per Masami and Frederic's suggestion
- Add __kprobes for ftrace_perf_buf_prepare() and make
ftrace_perf_buf_submit() inline as per Masami's suggestion
- Export ftrace_perf_buf_prepare since modules will use it
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Acked-by: Masami Hiramatsu <mhiramat@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <4B60E92D.9000808@cn.fujitsu.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Anton Blanchard [Mon, 18 Jan 2010 05:47:07 +0000 (16:47 +1100)]
perf: Fix inconsistency between IP and callchain sampling
When running perf across all cpus with backtracing (-a -g), sometimes we
get samples without associated backtraces:
23.44% init [kernel] [k] restore
11.46% init eeba0c [k] 0x00000000eeba0c
6.77% swapper [kernel] [k] .perf_ctx_adjust_freq
5.73% init [kernel] [k] .__trace_hcall_entry
4.69% perf libc-2.9.so [.] 0x0000000006bb8c
|
|--11.11%-- 0xfffa941bbbc
It turns out the backtrace code has a check for the idle task and the IP
sampling does not. This creates problems when profiling an interrupt
heavy workload (in my case 10Gbit ethernet) since we get no backtraces
for interrupts received while idle (ie most of the workload).
Right now x86 and sh check that current is not NULL, which should never
happen, so remove that too.
The idle task's exclusion must be performed from the core code, on top
of perf_event_attr::exclude_idle.
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
LKML-Reference: <20100118054707.GT12666@kryten>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
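For reference, a sketch of where the changelog says the filtering
belongs: in the core sampling path, keyed off
perf_event_attr::exclude_idle; the idle test shown is illustrative.

    if (event->attr.exclude_idle && current->pid == 0)
            return;  /* skip both the IP sample and the callchain */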
Mahesh Salgaonkar [Thu, 21 Jan 2010 12:55:16 +0000 (18:25 +0530)]
hw_breakpoints: Release the bp slot if arch_validate_hwbkpt_settings() fails.
On a given architecture, when hardware breakpoint registration fails
due to an unsupported access type (read/write/execute), we lose the bp
slot since register_perf_hw_breakpoint() does not release it on
failure.
Hence, any subsequent hardware breakpoint registration starts failing
with a 'no space left on device' error.
This patch introduces error handling in register_perf_hw_breakpoint()
and releases the bp slot on error.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: K. Prasad <prasad@linux.vnet.ibm.com>
Cc: Maneesh Soni <maneesh@in.ibm.com>
LKML-Reference: <20100121125516.GA32521@in.ibm.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
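A sketch of the error path described above; arch_validate_hwbkpt_settings()
and release_bp_slot() are the helpers named in this series, though the
exact call sequence shown here is an assumption.

    ret = arch_validate_hwbkpt_settings(bp, tsk);
    if (ret) {
            release_bp_slot(bp);  /* don't leak the slot on failure */
            return ret;
    }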
Hitoshi Mitake [Fri, 22 Jan 2010 13:45:29 +0000 (22:45 +0900)]
perf trace: Add -i option for choosing input file
perf trace lacks an -i option for choosing the input file.
This patch adds it to perf trace.
Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1264167929-6741-1-git-send-email-mitake@dcl.info.waseda.ac.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Fri, 22 Jan 2010 16:35:02 +0000 (14:35 -0200)]
perf symbols: Use the right variable to check for kallsyms in the cache
This probably wasn't noticed when testing on my parisc machine
because I must have manually copied the vmlinux file used on the
x86_64 machine into its cache. Now that I tried looking on an
x86-32 machine with a fresh cache, kernel symbols weren't being
resolved even with the right kallsyms copy in its cache, duh.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1264178102-4203-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Fri, 22 Jan 2010 16:35:01 +0000 (14:35 -0200)]
perf symbols: Fix inverted logic for showing kallsyms as the source of symbols
Only if we parsed /proc/kallsyms (or a copy found in the buildid
cache) should we set the dso long name to "[kernel.kallsyms]".
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1264178102-4203-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Thu, 21 Jan 2010 15:04:44 +0000 (13:04 -0200)]
perf top: Handle PERF_RECORD_{FORK,EXIT} events
As noticed by Mike, symbols in new tasks were not being
processed as we weren't processing these events.
Reported-by: Mike Galbraith <efault@gmx.de>
Tested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1264086284-1431-2-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Arnaldo Carvalho de Melo [Thu, 21 Jan 2010 15:04:43 +0000 (13:04 -0200)]
perf top: Fix sample counting
Broken since "
5b2bb75 perf top: Support userspace symbols too".
Reported-by: Mike Galbraith <efault@gmx.de>
Tested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <1264086284-1431-1-git-send-email-acme@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Amerigo Wang [Mon, 25 Jan 2010 05:07:30 +0000 (00:07 -0500)]
perf: Ignore perf.data.old
Tell git to ignore this file.
Signed-off-by: WANG Cong <amwang@redhat.com>
LKML-Reference: <20100125051052.3999.28082.sendpatchset@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Yong Wang [Fri, 22 Jan 2010 01:47:50 +0000 (09:47 +0800)]
perf report: Fix segmentation fault when running with '-g none'
A segmentation fault occurs when running perf report with '-g
none'.
Reported-by: Austin Zhang <austin.zhang@intel.com>
Signed-off-by: Yong Wang <yong.y.wang@intel.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20100122014750.GA4111@ywang-moblin2.bj.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Peter Zijlstra [Tue, 26 Jan 2010 17:50:16 +0000 (18:50 +0100)]
perf: Reimplement frequency driven sampling
There was a bug in the old period code that caused intel_pmu_enable_all()
or native_write_msr_safe() to show up quite high in the profiles.
Staring at that code made my head hurt, so I rewrote it in a
hopefully simpler fashion. It is now fully symmetric between tick-
and overflow-driven adjustments and uses less data to boot.
The only complication is that it basically wants to do a u128 division.
The code approximates that in a rather simple truncate until it fits
fashion, taking care to balance the terms while truncating.
This version does not generate that sampling artefact.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
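A self-contained sketch of the "truncate until it fits" approximation
(not the kernel's code, which balances the terms more carefully): we
want period = (count * NSEC_PER_SEC) / (nsec * freq), whose intermediate
products need 128 bits, so low-order bits are shed from count and nsec
in lockstep until 64-bit arithmetic suffices.

    #include <stdint.h>

    #define NSEC_PER_SEC 1000000000ULL

    static uint64_t approx_period(uint64_t count, uint64_t nsec, uint64_t freq)
    {
            /* shifting count and nsec right by the same amount preserves the
             * ratio (up to truncation) while shrinking both 64-bit products */
            while ((count && NSEC_PER_SEC > UINT64_MAX / count) ||
                   (nsec  && freq         > UINT64_MAX / nsec)) {
                    count >>= 1;
                    nsec  >>= 1;
            }

            uint64_t dividend = count * NSEC_PER_SEC;
            uint64_t divisor  = nsec * freq;

            return divisor ? dividend / divisor : 0;
    }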
Greg Kroah-Hartman [Tue, 26 Jan 2010 23:04:02 +0000 (15:04 -0800)]
fnctl: f_modown should call write_lock_irqsave/restore
Commit 703625118069f9f8960d356676662d3db5a9d116 exposed that f_modown()
should call write_lock_irqsave instead of just write_lock_irq, because
a caller could have a spinlock held and it would not be good to
re-enable interrupts.
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Tavis Ormandy <taviso@google.com>
Cc: stable <stable@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
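A sketch of the locking pattern in question (kernel context; the body
is illustrative, and the f_owner.lock field follows the function named
above): save and restore the caller's interrupt state instead of
unconditionally re-enabling interrupts on unlock.

    unsigned long flags;

    write_lock_irqsave(&filp->f_owner.lock, flags);
    /* ... update the owner ... */
    write_unlock_irqrestore(&filp->f_owner.lock, flags);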
Linus Torvalds [Tue, 26 Jan 2010 03:05:06 +0000 (19:05 -0800)]
Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
ext4: Drop EXT4_GET_BLOCKS_UPDATE_RESERVE_SPACE flag
ext4: Fix quota accounting error with fallocate
ext4: Handle -EDQUOT error on write
Linus Torvalds [Tue, 26 Jan 2010 03:03:58 +0000 (19:03 -0800)]
Merge git://git.kernel.org/pub/scm/linux/kernel/git/wim/linux-2.6-watchdog
* git://git.kernel.org/pub/scm/linux/kernel/git/wim/linux-2.6-watchdog:
[WATCHDOG] sbc_fitpc2_wdt: fix I/O space access technique.
[WATCHDOG] ixp2000: Fix build failure caused by missing include