15 years ago  perf_counter tools: Fix vmlinux symbol generation breakage
Mike Galbraith [Mon, 20 Jul 2009 12:01:38 +0000 (14:01 +0200)]
perf_counter tools: Fix vmlinux symbol generation breakage

vmlinux meets the criteria for symbol adjustment, which breaks vmlinux-generated symbols.
Fix this by exempting vmlinux.  This is a bit fragile in that someone could change the
kernel dso's name, but currently that name is also hardwired.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1248091298.18702.18.camel@marge.simson.net>

15 years ago  perf_counter: Detect debugfs location
Jason Baron [Tue, 21 Jul 2009 18:16:29 +0000 (14:16 -0400)]
perf_counter: Detect debugfs location

If "/sys/kernel/debug" is not a debugfs mount point, search for the debugfs
filesystem in /proc/mounts, but also allows the user to specify
'--debugfs-dir=blah' or set the environment variable: 'PERF_DEBUGFS_DIR'
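
A minimal user-space sketch of such a probe (not the actual tools/perf
code; find_debugfs() and the buffer size are illustrative):

  #include <mntent.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Return the debugfs mount point: honour PERF_DEBUGFS_DIR, then scan
   * /proc/mounts, then fall back to the default probe location. */
  static const char *find_debugfs(void)
  {
      static char path[256] = "/sys/kernel/debug";
      const char *env = getenv("PERF_DEBUGFS_DIR");
      struct mntent *m;
      FILE *fp;

      if (env)
          return env;

      fp = setmntent("/proc/mounts", "r");
      if (!fp)
          return path;
      while ((m = getmntent(fp)) != NULL) {
          if (!strcmp(m->mnt_type, "debugfs")) {
              strncpy(path, m->mnt_dir, sizeof(path) - 1);
              break;
          }
      }
      endmntent(fp);
      return path;
  }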

Signed-off-by: Jason Baron <jbaron@redhat.com>
[ also made it probe "/debug" by default ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090721181629.GA3094@redhat.com>

15 years ago  perf_counter: Add tracepoint support to perf list, perf stat
Jason Baron [Tue, 21 Jul 2009 16:20:22 +0000 (12:20 -0400)]
perf_counter: Add tracepoint support to perf list, perf stat

Add support to 'perf list' and 'perf stat' for kernel tracepoints. The
implementation provides 'for_each_subsystem' and 'for_each_event' helpers
for easy iteration over the tracepoints.
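
A rough standalone illustration of the iteration (not the perf
implementation; the helper name is made up, and the directory layout is
the usual debugfs tracing/events one):

  #include <dirent.h>
  #include <stdio.h>

  /* Each subsystem is a directory under <debugfs>/tracing/events and each
   * event a subdirectory below it -- print them as "subsys:event". */
  static void list_tracepoints(const char *debugfs)
  {
      char sys_path[512], ev_path[1024];
      struct dirent *sys, *ev;
      DIR *sys_dir, *ev_dir;

      snprintf(sys_path, sizeof(sys_path), "%s/tracing/events", debugfs);
      sys_dir = opendir(sys_path);
      if (!sys_dir)
          return;
      while ((sys = readdir(sys_dir)) != NULL) {     /* for_each_subsystem */
          if (sys->d_name[0] == '.')
              continue;
          snprintf(ev_path, sizeof(ev_path), "%s/%s", sys_path, sys->d_name);
          ev_dir = opendir(ev_path);
          if (!ev_dir)
              continue;
          while ((ev = readdir(ev_dir)) != NULL)     /* for_each_event */
              if (ev->d_name[0] != '.')
                  printf("%s:%s\n", sys->d_name, ev->d_name);
          closedir(ev_dir);
      }
      closedir(sys_dir);
  }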

Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <426129bf9fcc8ee63bb094cf736e7316a7dcd77a.1248190728.git.jbaron@redhat.com>

15 years ago  perf symbol: C++ demangling
Arnaldo Carvalho de Melo [Mon, 20 Jul 2009 17:14:12 +0000 (14:14 -0300)]
perf symbol: C++ demangling

[acme@doppio ~]$ perf report -s comm,dso,symbol -C firefox -d /usr/lib64/xulrunner-1.9.1/libxul.so | grep :: | head
     2.21%  [.] nsDeque::Push(void*)
     1.78%  [.] GraphWalker::DoWalk(nsDeque&)
     1.30%  [.] GCGraphBuilder::AddNode(void*, nsCycleCollectionParticipant*)
     1.27%  [.] XPCWrappedNative::CallMethod(XPCCallContext&, XPCWrappedNative::CallMode)
     1.18%  [.] imgContainer::DrawFrameTo(gfxIImageFrame*, gfxIImageFrame*, nsRect&)
     1.13%  [.] nsDeque::PopFront()
     1.11%  [.] nsGlobalWindow::RunTimeout(nsTimeout*)
     0.97%  [.] nsXPConnect::Traverse(void*, nsCycleCollectionTraversalCallback&)
     0.95%  [.] nsJSEventListener::cycleCollection::Traverse(void*, nsCycleCollectionTraversalCallback&)
     0.95%  [.] nsCOMPtr_base::~nsCOMPtr_base()
[acme@doppio ~]$

Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Vegard Nossum <vegard.nossum@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Suggested-by: Clark Williams <williams@redhat.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090720171412.GB10410@ghostprotocols.net>

15 years ago  perf: avoid structure size confusion by using a fixed size
Arjan van de Ven [Tue, 21 Jul 2009 05:54:26 +0000 (22:54 -0700)]
perf: avoid structure size confusion by using a fixed size

For some reason, this structure gets compiled as 36 bytes in some files
(the ones that allocate it) but 40 bytes in others (the ones that use it).
The cause is an off_t type that gets a different size in different
compilation units for some yet-to-be-explained reason.

But the effect is disastrous; the size/offset members of the struct
are at different offsets, and the result is mostly complete garbage.
The parser in perf is so robust that this all gets hidden, and after
skipping a certain number of samples, it recovers.... so this bug
is not normally noticed.

.... except when you want every sample to be exact.

Fix this by just using an explicitly sized type.
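
An illustrative before/after of the kind of change (not the actual perf
header; struct and field names are made up):

  #include <stdint.h>
  #include <sys/types.h>

  /* Before: off_t can be 4 or 8 bytes depending on how each compilation
   * unit sees _FILE_OFFSET_BITS, so the layout can differ between the
   * file that allocates the struct and the file that reads it. */
  struct header_fragile {
      uint32_t type;
      off_t    offset;
      off_t    size;
  };

  /* After: explicitly sized members give the same layout everywhere. */
  struct header_fixed {
      uint32_t type;
      uint64_t offset;
      uint64_t size;
  };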

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <4A655917.9080504@linux.intel.com>

15 years ago  perf_counter: Fix throttle/unthrottle event logging
Anton Blanchard [Wed, 22 Jul 2009 13:05:46 +0000 (23:05 +1000)]
perf_counter: Fix throttle/unthrottle event logging

Right now we only print PERF_EVENT_THROTTLE + 1 (ie PERF_EVENT_UNTHROTTLE).
Fix this to print both a throttle and unthrottle event.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090722130546.GE9029@kryten>

15 years ago  perf_counter: Improve perf stat and perf record option parsing
Anton Blanchard [Wed, 22 Jul 2009 13:04:12 +0000 (23:04 +1000)]
perf_counter: Improve perf stat and perf record option parsing

perf stat and perf record currently look for all options on the command
line. This can lead to some confusion:

# perf stat ls -l
  Error: unknown switch `l'

While we can work around this by adding '--' before the command, the git
option parsing code can stop at the first non-option:

# perf stat ls -l
 Performance counter stats for 'ls -l':
....
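
perf uses the git parse-options code for this. As a standalone
illustration of the behaviour (plain getopt rather than the git API), a
leading '+' in the optstring makes glibc stop at the first non-option,
leaving the traced command's own flags alone:

  #include <stdio.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      int opt;

      /* '+' => stop parsing at the first non-option argument */
      while ((opt = getopt(argc, argv, "+e:v")) != -1)
          ;   /* handle the tool's own -e/-v style options here */

      printf("child command starts at argv[%d]: %s\n", optind,
             optind < argc ? argv[optind] : "(none)");
      return 0;
  }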

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090722130412.GD9029@kryten>

15 years ago  perf_counter: PERF_SAMPLE_ID and inherited counters
Peter Zijlstra [Tue, 21 Jul 2009 11:19:40 +0000 (13:19 +0200)]
perf_counter: PERF_SAMPLE_ID and inherited counters

Anton noted that for inherited counters the counter-id as provided by
PERF_SAMPLE_ID isn't mappable to the id found through PERF_RECORD_ID
because each inherited counter gets its own id.

His suggestion was to always return the parent counter id, since that
is the primary counter id as exposed. However, these inherited
counters have a unique identifier so that events like
PERF_EVENT_PERIOD and PERF_EVENT_THROTTLE can be specific about which
counter gets modified, which is important when trying to normalize the
sample streams.

This patch removes PERF_EVENT_PERIOD in favour of PERF_SAMPLE_PERIOD,
which is more useful anyway, since changing periods became a lot more
common than initially thought -- rendering PERF_EVENT_PERIOD the less
useful solution (also, PERF_SAMPLE_PERIOD reports the more accurate
value, since it reports the value used to trigger the overflow,
whereas PERF_EVENT_PERIOD simply reports the requested period changed,
which might only take effect on the next cycle).

This still leaves us PERF_EVENT_THROTTLE to consider, but since that
_should_ be a rare occurrence, and linking it to a primary id is the
most useful bit to diagnose the problem, we introduce a
PERF_SAMPLE_STREAM_ID, for those few cases where the full
reconstruction is important.

[Does change the ABI a little, but I see no other way out]
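
A sketch of the resulting attribute setup (using today's perf_event_attr
names; the perf_counter-era names differed slightly, so treat the exact
identifiers as assumptions):

  #include <linux/perf_event.h>
  #include <string.h>

  static void init_attr(struct perf_event_attr *attr)
  {
      memset(attr, 0, sizeof(*attr));
      attr->type        = PERF_TYPE_HARDWARE;
      attr->config      = PERF_COUNT_HW_CPU_CYCLES;
      attr->inherit     = 1;      /* children get their own counter instances */
      attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_PERIOD |
                          PERF_SAMPLE_ID |        /* primary (parent) id    */
                          PERF_SAMPLE_STREAM_ID;  /* per-instance stream id */
  }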

Suggested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1248095846.15751.8781.camel@twins>

15 years ago  perf_counter: Plug more stack leaks
Peter Zijlstra [Wed, 22 Jul 2009 09:13:50 +0000 (11:13 +0200)]
perf_counter: Plug more stack leaks

Following the example of Arjan's patch, I went through and found a few more.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

15 years ago  perf: Fix stack data leak
Arjan van de Ven [Tue, 21 Jul 2009 07:55:05 +0000 (00:55 -0700)]
perf: Fix stack data leak

the "reserved" field was not initialized to zero, resulting in 4 bytes
of stack data leaking to userspace....

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

15 years ago  perf_counter: Remove unused variables
Peter Zijlstra [Wed, 22 Jul 2009 14:31:36 +0000 (16:31 +0200)]
perf_counter: Remove unused variables

Fix a gcc unused-variable warning.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>

15 years ago  Merge commit 'tip/perfcounters/core' into perf-counters-for-linus
Peter Zijlstra [Wed, 22 Jul 2009 16:05:48 +0000 (18:05 +0200)]
Merge commit 'tip/perfcounters/core' into perf-counters-for-linus

15 years agoperf_counter: Make call graph option consistent
Anton Blanchard [Thu, 16 Jul 2009 13:44:29 +0000 (15:44 +0200)]
perf_counter: Make call graph option consistent

perf record uses -g for logging call graph data but perf report
uses -c to print call graph data. Be consistent and use -g
everywhere for call graph data.

Also update the help text to reflect the current default -
fractal,0.5

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090716104817.803604373@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Add perf record option to log addresses
Anton Blanchard [Thu, 16 Jul 2009 13:44:29 +0000 (15:44 +0200)]
perf_counter: Add perf record option to log addresses

Add the -d or --data option to log event addresses (eg page
faults).

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090716104817.697698033@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Log vfork as a fork event
Anton Blanchard [Thu, 16 Jul 2009 13:44:29 +0000 (15:44 +0200)]
perf_counter: Log vfork as a fork event

Right now we don't output vfork events. Even though we should
always see an exec after a vfork, we may get perfcounter
samples between the vfork and exec. These samples can lead to
some confusion when parsing perfcounter data.

To keep things consistent we should always log a fork event. It
will result in a little more log data, but is less confusing to
trace parsing tools.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090716104817.589309391@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Synthesize VDSO mmap event
Anton Blanchard [Thu, 16 Jul 2009 13:44:29 +0000 (15:44 +0200)]
perf_counter: Synthesize VDSO mmap event

perf record synthesizes mmap events for the running process.
Right now it just catches file mappings, but we can check for
the vdso symbol and add that too.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090716104817.517264409@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Make sure we don't leak kernel memory to userspace
Anton Blanchard [Thu, 16 Jul 2009 13:15:52 +0000 (15:15 +0200)]
perf_counter: Make sure we don't leak kernel memory to userspace

There are a few places we are leaking tiny amounts of kernel
memory to userspace. This happens when writing out strings
because we always align the end to 64 bits.

To avoid this we should always use an appropriately sized
temporary buffer and ensure it is zeroed.

Since d_path assembles the string from the end of the buffer
backwards, we need to add 64 bits after the buffer to allow for
alignment.

We also need to copy arch_vma_name to the temporary buffer,
because if we use it directly we may end up copying to
userspace a number of bytes after the end of the string
constant.
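
A user-space sketch of the pattern (the real fix lives in the kernel
output code; the function name and buffer size here are illustrative):

  #include <stdint.h>
  #include <string.h>

  /* Copy a name into a zeroed buffer padded to a 64-bit boundary, so the
   * alignment bytes written out afterwards are zeros, not stack garbage. */
  static size_t emit_string(char *dst, size_t dst_len, const char *name)
  {
      char tmp[256 + sizeof(uint64_t)];   /* room for the 64-bit alignment */
      size_t len;

      memset(tmp, 0, sizeof(tmp));
      strncpy(tmp, name, sizeof(tmp) - sizeof(uint64_t) - 1);
      len = (strlen(tmp) + sizeof(uint64_t)) & ~(sizeof(uint64_t) - 1);
      if (len > dst_len)
          len = dst_len;
      memcpy(dst, tmp, len);              /* padding is already zeroed */
      return len;
  }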

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090716104817.273972048@samba.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: Fix index boundary check
Roel Kluin [Mon, 13 Jul 2009 00:25:47 +0000 (02:25 +0200)]
perf_counter tools: Fix index boundary check

Keep index within event_type_descriptors[]

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <4A5A7F0B.4070106@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Fix the tracepoint channel to perfcounters
Chris Wilson [Mon, 6 Jul 2009 08:31:33 +0000 (09:31 +0100)]
perf_counter: Fix the tracepoint channel to perfcounters

Fix a missed rename in EVENT_PROFILE support so that it gets
built and allows tracepoint tracing from the 'perf' tool.

Fix a typo in the (never before built & enabled) portion of
perf_counter.c, and update that code to the
attr.config changes as well.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Gamari <bgamari.foss@gmail.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <1246869094-21237-1-git-send-email-chris@chris-wilson.co.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter, x86: Extend perf_counter Pentium M support
Daniel Qarras [Sun, 12 Jul 2009 11:32:40 +0000 (04:32 -0700)]
perf_counter, x86: Extend perf_counter Pentium M support

I've attached a patch to remove the Pentium M special casing of
EMON, and as noticed, at least with my Pentium M the hardware PMU
now works:

 Performance counter stats for '/bin/ls /var/tmp':

       1.809988  task-clock-msecs         #      0.125 CPUs
              1  context-switches         #      0.001 M/sec
              0  CPU-migrations           #  0.000 M/sec
            224  page-faults              #  0.124 M/sec
        1425648  cycles                   #    787.656 M/sec
         912755  instructions             #  0.640 IPC

Vince suggested that this code was trying to address erratum
Y17 in Pentium-M's:

  http://download.intel.com/support/processors/mobile/pm/sb/25266532.pdf

But that erratum (related to IA32_MISC_ENABLES.7) does not
affect perfcounters as we don't use this toggle to disable RDPMC
and WRMSR/RDMSR access to performance counters. We keep cr4's
bit 8 (X86_CR4_PCE) clear so unprivileged RDPMC access is not
allowed anyway.

Cc: Vince Weaver <vince@deater.net>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@googlemail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Introduce -n/--show-nr-samples
Arnaldo Carvalho de Melo [Sat, 11 Jul 2009 15:18:37 +0000 (12:18 -0300)]
perf report: Introduce -n/--show-nr-samples

[acme@doppio pahole]$ perf report -ns comm,dso,symbol -d /lib64/libc-2.10.1.so -C pahole | head -17
    21.94%      32101  [.] _int_malloc
    20.10%      29402  [.] __GI_strcmp
    16.77%      24533  [.] __tsearch
    12.61%      18450  [.] malloc_consolidate
     6.42%       9394  [.] _int_free
     6.28%       9191  [.] __tfind
     4.56%       6678  [.] __GI___libc_free
     4.46%       6520  [.] _IO_vfprintf_internal
     2.59%       3786  [.] __malloc
     1.17%       1716  [.] __GI_memcpy
[acme@doppio pahole]$

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1247325517-12272-5-git-send-email-acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: PLT info is stripped in -debuginfo packages
Arnaldo Carvalho de Melo [Sat, 11 Jul 2009 15:18:36 +0000 (12:18 -0300)]
perf_counter tools: PLT info is stripped in -debuginfo packages

So we need to get the richer .symtab from the debuginfo
packages but the PLT info from the original DSO where we have
just the leaner .dynsym symtab.

Example:

| [acme@doppio pahole]$ perf report --sort comm,dso,symbol > before
| [acme@doppio pahole]$ perf report --sort comm,dso,symbol > after
| [acme@doppio pahole]$ diff -U1 before after
| --- before 2009-07-11 11:04:22.688595741 -0300
| +++ after 2009-07-11 11:04:33.380595676 -0300
| @@ -80,3 +80,2 @@
|       0.07%  pahole ./build/pahole              [.] pahole_stealer
| -     0.06%  pahole /usr/lib64/libdw-0.141.so   [.] 0x00000000007140
|       0.06%  pahole /usr/lib64/libdw-0.141.so   [.] __libdw_getabbrev
| @@ -91,2 +90,3 @@
|       0.06%  pahole [kernel]                    [k] free_hot_cold_page
| +     0.06%  pahole /usr/lib64/libdw-0.141.so   [.] tfind@plt
|       0.05%  pahole ./build/libdwarves.so.1.0.0 [.] ftype__add_parameter
| @@ -242,2 +242,3 @@
|       0.01%  pahole [kernel]                    [k] account_group_user_time
| +     0.01%  pahole /usr/lib64/libdw-0.141.so   [.] strlen@plt
|       0.01%  pahole ./build/pahole              [.] strcmp@plt
| [acme@doppio pahole]$

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1247325517-12272-4-git-send-email-acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Make the output more compact
Arnaldo Carvalho de Melo [Sat, 11 Jul 2009 15:18:35 +0000 (12:18 -0300)]
perf report: Make the output more compact

When we filter by column content we may end up with a column
that has the same value for all the lines. So remove that
column and report its unique value at the top, as a comment.

Example:

  [acme@doppio pahole]$  perf report --sort comm,dso,symbol -d ./build/libdwarves.so.1.0.0 -C pahole | head -15
  # dso: ./build/libdwarves.so.1.0.0
  # comm: pahole
  # Samples: 58409
  #
  # Overhead  Symbol
  # ........  ......
  #
      20.93%  [.] tag__recode_dwarf_type
      14.94%  [.] namespace__recode_dwarf_types
      10.38%  [.] cu__table_add_tag
       6.69%  [.] __die__process_tag
       5.05%  [.] die__process_function
       4.70%  [.] list__for_all_tags
       3.68%  [.] tag__init
       3.48%  [.] die__create_new_parameter
  [acme@doppio pahole]$

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1247325517-12272-3-git-send-email-acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  strlist: Introduce strlist__entry and strlist__nr_entries methods
Arnaldo Carvalho de Melo [Sat, 11 Jul 2009 15:18:34 +0000 (12:18 -0300)]
strlist: Introduce strlist__entry and strlist__nr_entries methods

The strlist__entry method allows accessing strlists like an
array; it will be used in 'perf report' to access the first
entry.

We now keep nr_entries so that we can check whether we have just
one entry; this will be used in 'perf report' to improve the output
by showing a value just once, at the top, when we have only, say,
one DSO.

While at it, use nr_entries to optimize strlist__is_empty by not
using the far more costly rb_first based implementation.
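
A hedged usage sketch based on this description (a fragment; the exact
signatures and the entry's ->s field are assumptions, not quoted from
the code):

  struct strlist *dsos = strlist__new(true, dso_list_str);

  if (strlist__nr_entries(dsos) == 1)
      /* only one DSO: print it once at the top instead of as a column */
      printf("# dso: %s\n", strlist__entry(dsos, 0)->s);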

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1247325517-12272-2-git-send-email-acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Tidy up reporting of symbols not found
Arnaldo Carvalho de Melo [Sat, 11 Jul 2009 15:18:33 +0000 (12:18 -0300)]
perf report: Tidy up reporting of symbols not found

Always print the level info (whether the symbol is in the kernel,
the hypervisor or userspace), as that is in the hist_entry.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1247325517-12272-1-git-send-email-acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Adjust column width to the values sampled
Arnaldo Carvalho de Melo [Sat, 11 Jul 2009 01:47:28 +0000 (22:47 -0300)]
perf report: Adjust column width to the values sampled

Auto-adjust the column width of perf report output to the
longest occurring string length.

Example:

[acme@doppio pahole]$  perf report --sort comm,dso,symbol | head -13

    12.79%   pahole  /usr/lib64/libdw-0.141.so    [.] __libdw_find_attr
     8.90%   pahole  /lib64/libc-2.10.1.so        [.] _int_malloc
     8.68%   pahole  /usr/lib64/libdw-0.141.so    [.] __libdw_form_val_len
     8.15%   pahole  /lib64/libc-2.10.1.so        [.] __GI_strcmp
     6.80%   pahole  /lib64/libc-2.10.1.so        [.] __tsearch
     5.54%   pahole  ./build/libdwarves.so.1.0.0  [.] tag__recode_dwarf_type
[acme@doppio pahole]$

[acme@doppio pahole]$  perf report --sort comm,dso,symbol -d /lib64/libc-2.10.1.so | head -10

    21.92%   pahole  /lib64/libc-2.10.1.so  [.] _int_malloc
    20.08%   pahole  /lib64/libc-2.10.1.so  [.] __GI_strcmp
    16.75%   pahole  /lib64/libc-2.10.1.so  [.] __tsearch
[acme@doppio pahole]$

Also add these extra options to control the new behaviour:

  -w, --field-width

Force each column width to the provided list, for large terminal
readability.

  -t, --field-separator:

Use a special separator character and don't pad with spaces, replacing
all occurrences of this separator in symbol names (and other output) with
a '.' character, so that it is the only non-valid separator.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <20090711014728.GH3452@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Stop open coding unclone_ctx
Peter Zijlstra [Fri, 10 Jul 2009 07:06:56 +0000 (09:06 +0200)]
perf_counter: Stop open coding unclone_ctx

Instead of open coding the unclone context thingy, put it in
a common function.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Clean up global vs counter enable
Peter Zijlstra [Fri, 10 Jul 2009 07:59:56 +0000 (09:59 +0200)]
perf_counter: Clean up global vs counter enable

Ingo noticed that both AMD and P6 call
x86_pmu_disable_counter() on *_pmu_enable_counter(). This is
because we rely on the side effect of that call to program
the event config but not touch the EN bit.

We change that for AMD by having enable_all() simply write
the full config in, and for P6 by explicitly coding it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Fix up P6 PMU details
Peter Zijlstra [Wed, 8 Jul 2009 08:21:41 +0000 (10:21 +0200)]
perf_counter: Fix up P6 PMU details

The P6 doesn't seem to support cache ref/hit/miss counts, so
we extend the generic hardware event codes to have 0 and -1
mean the same thing as for the generic cache events.

Furthermore, it turns out the 0 event does not count
(that is, it's reported that on PPro it actually does count
something), therefore use an event configuration that's
specified not to count to disable the counters.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter: Add P6 PMU support
Vince Weaver [Wed, 8 Jul 2009 21:46:14 +0000 (17:46 -0400)]
perf_counter: Add P6 PMU support

Add basic P6 PMU support. The P6 uses the EVNTSEL0 EN bit to
enable/disable both its counters. We use this for the
global enable/disable, and clear all config bits (except EN)
to disable individual counters.

Actual ia32 hardware doesn't support lfence, so use a locked
op without side-effect to implement a full barrier.
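
The barrier idiom being referred to looks roughly like this (illustrative,
not the exact macro that was merged):

  /* A locked read-modify-write with no visible side effect orders memory
   * accesses on pre-SSE2 ia32 parts that lack lfence/mfence. */
  static inline void p6_full_barrier(void)
  {
      asm volatile("lock; addl $0, (%%esp)" ::: "memory");
  }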

perf stat and perf record seem to function correctly.

[a.p.zijlstra@chello.nl: cleanups and complete the enable/disable code]

Signed-off-by: Vince Weaver <vince@deater.net>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <Pine.LNX.4.64.0907081718450.2715@pianoman.cluster.toy>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: Rename cache events to remove $
Anton Blanchard [Mon, 6 Jul 2009 12:01:31 +0000 (22:01 +1000)]
perf_counter tools: Rename cache events to remove $

The cache events contain '$', which will hit shell variable
expansion. To avoid confusion, change this to 'cache', i.e.
L1-d$-loads becomes L1-dcache-loads.

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: Roland Dreier <rdreier@cisco.com>
Cc: Jaswinder Singh Rajput <jaswinder@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <20090706120131.GB4391@kryten>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Add "Fractal" mode output - support callchains with relative overhead...
Frederic Weisbecker [Sun, 5 Jul 2009 05:39:21 +0000 (07:39 +0200)]
perf report: Add "Fractal" mode output - support callchains with relative overhead rate

The current callchain displays the overhead rates as absolute:
relative to the total overhead.

This patch provides relative overhead percentages, in which each
branch of the callchain tree is an independent instrumented object.

This provides a 'fractal' view of the call-chain profile: each
sub-graph looks like a profile in itself - relative to its parent.

You can produce such output by using the "fractal" mode
that you can abbreviate via f, fr, fra, frac, etc...

./perf report -s sym -c fractal

Example:

     8.46%  [k] copy_user_generic_string
                |
                |--52.01%-- generic_file_aio_read
                |          do_sync_read
                |          vfs_read
                |          |
                |          |--97.20%-- sys_pread64
                |          |          system_call_fastpath
                |          |          pread64
                |          |
                |           --2.81%-- sys_read
                |                     system_call_fastpath
                |                     __read
                |
                |--39.85%-- generic_file_buffered_write
                |          __generic_file_aio_write_nolock
                |          generic_file_aio_write
                |          do_sync_write
                |          reiserfs_file_write
                |          vfs_write
                |          |
                |          |--97.05%-- sys_pwrite64
                |          |          system_call_fastpath
                |          |          __pwrite64
                |          |
                |           --2.95%-- sys_write
                |                     system_call_fastpath
                |                     __write_nocancel
[...]

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246772361-9960-5-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: callchains: Manage the cumul hits on the fly
Frederic Weisbecker [Sun, 5 Jul 2009 05:39:20 +0000 (07:39 +0200)]
perf_counter tools: callchains: Manage the cumul hits on the fly

The cumul hits are the number of hits of every child of a node
plus the hits of the current node, required for computing the
percentage of a branch.

These numbers are calculated during the sorting of the branches of
the callchain tree using a depth-first postfix traversal, so that
cumulative hits are propagated in the right order.

But if we plan to implement percentages relative to the parent and not
absolute percentages (relative to the whole overhead), we need to know
the cumulative hits of the parent before computing the children
because the relative minimum acceptable number of entries (ie: minimum
rate against the cumulative hits from the parent) is the basis to
filter the children against a given rate.

Then we need to handle the cumul hits on the fly to prepare the
implementation of relative overhead rates.
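
A sketch of the idea (field and type names are assumptions mirroring the
description, not the actual tools/perf structures; it assumes the kernel's
list.h helpers and a u64 typedef as used in the tools):

  struct chain_node {
      u64              hit;         /* samples ending exactly on this node   */
      u64              cumul_hit;   /* hit + sum of the children's cumul_hit */
      struct list_head children;
      struct list_head brothers;
  };

  /* Depth-first postfix: children are finalized before their parent, and
   * with the on-the-fly variant the parent's total is known before its
   * children get filtered against a relative threshold. */
  static u64 accumulate_hits(struct chain_node *node)
  {
      struct chain_node *child;

      node->cumul_hit = node->hit;
      list_for_each_entry(child, &node->children, brothers)
          node->cumul_hit += accumulate_hits(child);
      return node->cumul_hit;
  }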

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246772361-9960-4-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Change default callchain parameters
Frederic Weisbecker [Sun, 5 Jul 2009 05:39:19 +0000 (07:39 +0200)]
perf report: Change default callchain parameters

The default callchain parameters are set to use flat mode and to
never filter backtraces on an overhead threshold.

But flat mode is boring compared to graph mode.
Also the number of callchains may be very high if none is
filtered.

Let's change this to set the graph view and a minimum overhead of 0.5%
as default parameters.

Reported-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246772361-9960-3-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Use a modifiable string for default callchain options
Frederic Weisbecker [Sun, 5 Jul 2009 05:39:18 +0000 (07:39 +0200)]
perf report: Use a modifiable string for default callchain options

If the user doesn't provide options to tune the callchain output
(i.e. uses -c without arguments), then the default value passed
in the OPT_CALLBACK_DEFAULT() macro is used.

But it's parsed later by strtok(), which replaces each comma separator
with a zero byte. This may segfault as we are using a read-only string.

Use a modifiable one instead, and also fix the "100%" default
minimum threshold value by turning it into a 0 (output every callchain),
as originally intended.
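
A standalone illustration of the bug and the fix (generic values, not the
perf code itself):

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      /* strtok() writes '\0' over each separator, so handing it a string
       * literal (read-only memory) is what segfaulted:
       *
       *     char *bad = "graph,0.5";
       *     strtok(bad, ",");        -- may crash
       */
      char def[] = "graph,0.5";       /* modifiable copy instead */
      char *mode = strtok(def, ",");
      char *min  = strtok(NULL, ",");

      printf("mode=%s min_percent=%s\n", mode, min);
      return 0;
  }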

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246772361-9960-2-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Warn on callchain output request from non-callchain file
Frederic Weisbecker [Sun, 5 Jul 2009 05:39:17 +0000 (07:39 +0200)]
perf report: Warn on callchain output request from non-callchain file

perf report segfaults while trying to handle callchains from a
non-callchain data file.

Instead of a segfault, print a useful message to the user.

Reported-by: Jens Axboe <jens.axboe@oracle.com>
Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246772361-9960-1-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Inline atomic64_read() again
Eric Dumazet [Fri, 3 Jul 2009 14:50:10 +0000 (16:50 +0200)]
x86: atomic64: Inline atomic64_read() again

Now that atomic64_read() is lightweight (no register pressure and
a small icache footprint), we can inline it again.

Also use the "=&A" constraint instead of "+A" to avoid a warning
about the uninitialized 'res' variable. (gcc had to force 0 into eax/edx)

  $ size vmlinux.prev vmlinux.after
     text    data     bss     dec     hex filename
  4908667  451676 1684868 7045211  6b805b vmlinux.prev
  4908651  451676 1684868 7045195  6b804b vmlinux.after

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <4A4E1AA2.30002@gmail.com>
[ Also fix typo in atomic64_set() export ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Clean up atomic64_sub_and_test() and atomic64_add_negative()
Ingo Molnar [Fri, 3 Jul 2009 18:11:30 +0000 (20:11 +0200)]
x86: atomic64: Clean up atomic64_sub_and_test() and atomic64_add_negative()

Linus noticed that the variable 'old_val' is
confusingly named in these functions - the correct
name is 'new_val'.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907030942260.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Improve atomic64_xchg()
Ingo Molnar [Fri, 3 Jul 2009 17:56:36 +0000 (19:56 +0200)]
x86: atomic64: Improve atomic64_xchg()

Remove the read-first logic from atomic64_xchg() and simplify
the loop.

This function was the last user of __atomic64_read() - remove it.

Also, change the 'real_val' assumption from the somewhat quirky
1ULL << 32 value to the (just as arbitrary, but simpler) value
of 0.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <tip-05118ab8859492ac9ddda0154cf90e37b0a4a0b0@git.kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Export APIs to modules
Ingo Molnar [Fri, 3 Jul 2009 15:28:57 +0000 (17:28 +0200)]
x86: atomic64: Export APIs to modules

atomic64_t primitives are used by a handful of drivers,
so export the APIs consistently. These were inlined
before.

Also mark atomic64_32.o a core object, so that the symbols
are available even if not linked to core kernel pieces.

Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <tip-05118ab8859492ac9ddda0154cf90e37b0a4a0b0@git.kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Improve atomic64_read()
Eric Dumazet [Fri, 3 Jul 2009 11:23:02 +0000 (13:23 +0200)]
x86: atomic64: Improve atomic64_read()

Optimize atomic64_read() as a special open-coded
cmpxchg8b variant. This generates nicer code:

arch/x86/lib/atomic64_32.o:

   text    data     bss     dec     hex filename
    435       0       0     435     1b3 atomic64_32.o.before
    431       0       0     431     1af atomic64_32.o.after

md5:
   bd8ab95e69c93518578bfaf0ea3be4d9  atomic64_32.o.before.asm
   2bdfd4bd1f6b7b61b7fc127aef90ce3b  atomic64_32.o.after.asm

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Code atomic(64)_read and atomic(64)_set in C not CPP
Paul Mackerras [Wed, 1 Jul 2009 22:57:12 +0000 (08:57 +1000)]
x86: atomic64: Code atomic(64)_read and atomic(64)_set in C not CPP

Occasionally we get bugs where atomic_read or atomic_set are
used on atomic64_t variables or vice versa.  These bugs don't
generate warnings on x86 because atomic_read and atomic_set are
coded as macros rather than C functions, so we don't get any
type-checking on their arguments; similarly for atomic64_read
and atomic64_set in 64-bit kernels.

This converts them to C functions so that the arguments are
type-checked and bugs like this will get caught more easily. It
also converts atomic_cmpxchg and atomic_xchg, and
atomic64_cmpxchg and atomic64_xchg on 64-bit, so we get
type-checking on their arguments too.

Compiling a typical 64-bit x86 config, this generates no new
warnings, and the vmlinux text is 86 bytes smaller.
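
A simplified illustration of why the conversion catches these bugs (not
the actual asm/atomic.h definitions):

  typedef struct { int  counter; } atomic_t;
  typedef struct { long counter; } atomic64_t;

  #define atomic_read_macro(v)  ((v)->counter)   /* accepts anything with .counter */

  static inline int atomic_read(const atomic_t *v) /* argument type is checked */
  {
      return v->counter;
  }

  /* atomic_read_macro(&a64) compiles silently and truncates the value;
   * atomic_read(&a64) now produces an incompatible-pointer-type warning. */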

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Fix unclean type use in atomic64_xchg()
Ingo Molnar [Fri, 3 Jul 2009 11:02:39 +0000 (13:02 +0200)]
x86: atomic64: Fix unclean type use in atomic64_xchg()

Linus noticed that atomic64_xchg() uses atomic_read(), which
happens to work because atomic_read() is a macro so the
.counter value gets u64-read on 32-bit too - but this is really
bogus and serious bugs are waiting to happen.

Fix atomic64_xchg() to use __atomic64_read() instead.

No code changed:

arch/x86/lib/atomic64_32.o:

   text    data     bss     dec     hex filename
    435       0       0     435     1b3 atomic64_32.o.before
    435       0       0     435     1b3 atomic64_32.o.after

md5:
   bd8ab95e69c93518578bfaf0ea3be4d9  atomic64_32.o.before.asm
   bd8ab95e69c93518578bfaf0ea3be4d9  atomic64_32.o.after.asm

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Make atomic_read() type-safe
Ingo Molnar [Fri, 3 Jul 2009 11:06:01 +0000 (13:06 +0200)]
x86: atomic64: Make atomic_read() type-safe

Linus noticed that atomic64_xchg() uses atomic_read(), which
happens to work because atomic_read() is a macro so the
.counter value gets u64-read on 32-bit too - but this is really
bogus and serious bugs are waiting to happen.

Change atomic_read() to be a type-safe inline, and this exposes
the atomic64 bogosity as well:

  arch/x86/lib/atomic64_32.c: In function ‘atomic64_xchg’:
  arch/x86/lib/atomic64_32.c:39: warning: passing argument 1 of ‘atomic_read’ from incompatible pointer type

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Reduce size of functions
Ingo Molnar [Fri, 3 Jul 2009 10:51:19 +0000 (12:51 +0200)]
x86: atomic64: Reduce size of functions

cmpxchg8b is a huge instruction in terms of register footprint;
we almost never want to inline it, not even within the same
code module.

GCC 4.3 still messes up for two functions, under-judging the
true cost of this instruction - so annotate two key functions
to reduce the bloat:

arch/x86/lib/atomic64_32.o:

   text    data     bss     dec     hex filename
   1763       0       0    1763     6e3 atomic64_32.o.before
    435       0       0     435     1b3 atomic64_32.o.after

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Improve atomic64_add_return()
Ingo Molnar [Fri, 3 Jul 2009 10:39:07 +0000 (12:39 +0200)]
x86: atomic64: Improve atomic64_add_return()

Linus noted (based on Eric Dumazet's numbers) that we would
probably be better off not trying an atomic_read() in
atomic64_add_return() but instead intentionally letting the first
cmpxchg8b fail - to get a cache-friendly 'give me ownership
of this cacheline' transaction. That can then be followed
by the real cmpxchg8b which sets the value local to the CPU.
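
A conceptual sketch of that loop, using the GCC __sync builtin in place of
raw cmpxchg8b (names and types are illustrative, not the kernel code):

  #include <stdint.h>

  typedef struct { uint64_t counter; } atomic64_t;

  static uint64_t atomic64_add_return_sketch(uint64_t delta, atomic64_t *v)
  {
      uint64_t old = 0;   /* deliberate guess: the first CAS will likely fail,
                             but it pulls the cacheline in for ownership */
      uint64_t cur;

      while ((cur = __sync_val_compare_and_swap(&v->counter,
                                                old, old + delta)) != old)
          old = cur;      /* retry with the value the CPU just told us */

      return old + delta;
  }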

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Improve cmpxchg8b()
Eric Dumazet [Fri, 3 Jul 2009 11:26:41 +0000 (13:26 +0200)]
x86: atomic64: Improve cmpxchg8b()

Rewrite cmpxchg8b() to not use the %edi register but a generic "+m"
constraint, to increase compiler freedom in code generation and
possibly generate better code.

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Improve atomic64_read()
Eric Dumazet [Fri, 3 Jul 2009 10:14:27 +0000 (12:14 +0200)]
x86: atomic64: Improve atomic64_read()

Linus noticed that the 32-bit version of atomic64_read() was
being overly complex with re-reading the value and doing a
retry loop over that.

Instead we can just rely on cmpxchg8b returning either the new
value or returning the current value.

We can use any 'old' value, which will be faster as it can be
loaded via immediates. Using some value that is not equal to
the real value in memory makes the instruction faster.

This also has the advantage that the CPU could avoid dirtying
the cacheline.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: Move the 32-bit atomic64_t implementation to a .c file
Ingo Molnar [Fri, 3 Jul 2009 11:26:39 +0000 (13:26 +0200)]
x86: atomic64: Move the 32-bit atomic64_t implementation to a .c file

Linus noted that the atomic64_t primitives are all inlines
currently which is crazy because these functions have a large
register footprint anyway.

Move them to a separate file: arch/x86/lib/atomic64_32.c

Also, while at it, rename all uses of 'unsigned long long' to
the much shorter u64.

This makes the appearance of the prototypes a lot nicer - and
it also uncovered a few bugs where (yet unused) API variants
had 'long' as their return type instead of u64.

[ More intrusive changes are not yet done in this patch. ]

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  x86: atomic64: The atomic64_t data type should be 8 bytes aligned on 32-bit too
Eric Dumazet [Thu, 2 Jul 2009 22:08:26 +0000 (00:08 +0200)]
x86: atomic64: The atomic64_t data type should be 8 bytes aligned on 32-bit too

Locked instructions on two cache lines at once are painful. If
atomic64_t uses two cache lines, my test program is 10x slower.

The chance for that is significant: 4/32 or 12.5%.

Make sure an atomic64_t is 8-byte aligned.
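
The shape of the change, roughly (simplified; per the note below the
kernel patch ended up using the equivalent __aligned(8) helper):

  #include <stdint.h>

  typedef struct {
      uint64_t counter;
  } __attribute__((aligned(8))) atomic64_t;

  _Static_assert(_Alignof(atomic64_t) == 8, "atomic64_t must be 8-byte aligned");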

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
[ changed it to __aligned(8) as per Andrew's suggestion ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Annotate variable initialization
Ingo Molnar [Fri, 3 Jul 2009 11:17:28 +0000 (13:17 +0200)]
perf report: Annotate variable initialization

Certain versions of GCC don't see the initialization that is done here:

  builtin-report.c: In function ‘__cmd_report’:
  builtin-report.c:1038: warning: ‘syms’ may be used uninitialized in this function

So annotate it with a NULL initialization.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: Adjust symbols in ET_EXEC files too
Arnaldo Carvalho de Melo [Fri, 3 Jul 2009 00:24:14 +0000 (21:24 -0300)]
perf_counter tools: Adjust symbols in ET_EXEC files too

Ingo Molnar wrote:

> i just bisected a 'perf report' bug that would cause us to not
> resolve all user-space symbols in a 'git gc' run to:
>
f5812a7a336fb952d819e4427b9a2dce02368e82 is first bad commit
> commit f5812a7a336fb952d819e4427b9a2dce02368e82
> Author: Arnaldo Carvalho de Melo <acme@redhat.com>
> Date:   Tue Jun 30 11:43:17 2009 -0300
>
>     perf_counter tools: Adjust only prelinked symbol's addresses

Rename ->prelinked to ->adjust_symbols and apply what was done
only for prelinked libraries to ET_EXEC binaries as well, such as
/usr/bin/git:

[acme@doppio pahole]$ readelf -h /usr/bin/git | grep Type
  Type:                              EXEC (Executable file)
[acme@doppio pahole]$

And after installing the 'git-debuginfo' package, I get correct results:

[acme@doppio linux-2.6-tip]$ perf report --sort comm,dso,symbol -d /usr/bin/git | head -20

 #
 # (1139614 samples)
 #
 # Overhead           Command  Shared Object              Symbol
 # ........  ................  .........................  ......
 #
    34.98%               git  /usr/bin/git               [.] send_sideband
    33.39%               git  /usr/bin/git               [.] enter_repo
     6.81%               git  /usr/bin/git               [.] diff_opt_parse
     4.95%               git  /usr/bin/git               [.] is_repository_shallow
     3.24%               git  /usr/bin/git               [.] odb_mkstemp
     1.39%               git  /usr/bin/git               [.] output
     1.34%               git  /usr/bin/git               [.] xmmap
     1.25%               git  /usr/bin/git               [.] receive_pack_config
     1.16%               git  /usr/bin/git               [.] git_pathdup
     0.90%               git  /usr/bin/git               [.] read_object_with_reference
     0.86%               git  /usr/bin/git               [.] show_patch_diff
     0.85%               git  /usr/bin/git               0x00000000095e2e
     0.69%               git  /usr/bin/git               [.] display
[acme@doppio linux-2.6-tip]$

I'll check the remaining cases where we can't resolve symbols, like
this 0x00000000095e2e, later.

And I guess this will fix the problems Mike was seeing too:

 [acme@doppio linux-2.6-tip]$ readelf -h ../build/perf/vmlinux | grep Type
   Type:                              EXEC (Executable file)
 [acme@doppio linux-2.6-tip]$

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: Display percents of hits in callchain with overhead colors
Frederic Weisbecker [Thu, 2 Jul 2009 18:14:35 +0000 (20:14 +0200)]
perf_counter tools: Display percents of hits in callchain with overhead colors

This adds the use of colors to signal at a glance the important
overhead thresholds in callchain hit rates.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246558475-10624-3-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: Provide helper to print percents color
Frederic Weisbecker [Thu, 2 Jul 2009 18:14:34 +0000 (20:14 +0200)]
perf_counter tools: Provide helper to print percents color

perf annotate, perf report and perf top all share the same
colored printing of percentages, according to the following
rules:

    High overhead: > 5%, colored in red
    Mid overhead:  > 0.5%, colored in green
    Low overhead:  < 0.5%, default color

Factorize these multiple checks into a single function named
percent_color_fprintf(), and also provide get_percent_color()
for sites which print percentages and other things at the same
time.
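
A hedged sketch of the helpers described above (the escape sequences,
format string and exact signatures are assumptions; only the thresholds
come from this text):

  #include <stdio.h>

  #define COLOR_RED    "\033[31m"     /* high overhead:  > 5%   */
  #define COLOR_GREEN  "\033[32m"     /* mid overhead:   > 0.5% */
  #define COLOR_RESET  "\033[0m"

  static const char *get_percent_color(double percent)
  {
      if (percent > 5.0)
          return COLOR_RED;
      if (percent > 0.5)
          return COLOR_GREEN;
      return "";                      /* low overhead: default color */
  }

  static int percent_color_fprintf(FILE *fp, double percent)
  {
      const char *color = get_percent_color(percent);

      return fprintf(fp, "%s%6.2f%%%s", color, percent,
                     *color ? COLOR_RESET : "");
  }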

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246558475-10624-2-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: Set the minimum percent for callchains to be displayed
Frederic Weisbecker [Thu, 2 Jul 2009 18:14:33 +0000 (20:14 +0200)]
perf_counter tools: Set the minimum percent for callchains to be displayed

Callchain output may become a burden on a trace because even
rarely hit sites are exposed. This can be too much information.

Let the user set a threshold as a minimum percent of hits using
the new pattern for the -c option:

    -c mode,min_percent

Example:

$ perf report -s sym -c flat,4

     8.25%  [k] copy_user_generic_string
             4.19%
                copy_user_generic_string
                generic_file_aio_read
                do_sync_read
                vfs_read
                sys_pread64
                system_call_fastpath
                pread64

     5.39%  [k] search_by_key
     4.63%  0x00000000009e0a
     2.36%  [k] memcpy_c
[...]

$ perf report -s sym -c graph,2

     8.25%  [k] copy_user_generic_string
                |
                |--4.31%-- generic_file_aio_read
                |          do_sync_read
                |          vfs_read
                |          |
                |           --4.19%-- sys_pread64
                |                     system_call_fastpath
                |                     pread64
                |
                 --3.24%-- generic_file_buffered_write
                           __generic_file_aio_write_nolock
                           generic_file_aio_write
                           do_sync_write
                           reiserfs_file_write
                           vfs_write
                           |
                            --3.14%-- sys_pwrite64
                                      system_call_fastpath
                                      __pwrite64

     5.39%  [k] search_by_key
                |
                 --2.23%-- reiserfs_update_sd_size

     4.63%  0x00000000009e0a

     2.36%  [k] memcpy_c
[...]

You can also omit it and it will default to 0.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246558475-10624-1-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf report: Add support for callchain graph output
Frederic Weisbecker [Thu, 2 Jul 2009 15:58:21 +0000 (17:58 +0200)]
perf report: Add support for callchain graph output

Currently, the printing of callchains is done in a single
vertical level; this is the "flat" mode:

8.25%  [k] copy_user_generic_string
             4.19%
                copy_user_generic_string
                generic_file_aio_read
                do_sync_read
                vfs_read
                sys_pread64
                system_call_fastpath
                pread64

This patch introduces a new "graph" mode which provides a
hierarchical output of factorized paths recursively sorted:

 8.25%  [k] copy_user_generic_string
                |
                |--4.31%-- generic_file_aio_read
                |          do_sync_read
                |          vfs_read
                |          |
                |          |--4.19%-- sys_pread64
                |          |          system_call_fastpath
                |          |          pread64
                |          |
                |           --0.12%-- sys_read
                |                     system_call_fastpath
                |                     __read
                |
                |--3.24%-- generic_file_buffered_write
                |          __generic_file_aio_write_nolock
                |          generic_file_aio_write
                |          do_sync_write
                |          reiserfs_file_write
                |          vfs_write
                |          |
                |          |--3.14%-- sys_pwrite64
                |          |          system_call_fastpath
                |          |          __pwrite64
                |          |
                |           --0.10%-- sys_write
[...]

The command line has changed accordingly.

When the -c option is provided, callchains are output in flat
mode by default.

But you can override it:

    perf report -c graph

or

    perf report -c flat

You can also pass the abbreviated mode:

    perf report -c g

or

    perf report -c gra

will both make use of the graph mode.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246550301-8954-3-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

15 years ago  perf_counter tools: Add new OPT_CALLBACK_DEFAULT option
Frederic Weisbecker [Thu, 2 Jul 2009 15:58:20 +0000 (17:58 +0200)]
perf_counter tools: Add new OPT_CALLBACK_DEFAULT option

There is no predefined macro to create an option that can have
a custom value or a default one if none is given.

This patch provides a new helper, OPT_CALLBACK_DEFAULT(), which
defines such an option.

For example, considering an option -c, we want to get the
default value in the following cases:

    perf command -c -d
    perf command -d -c

And the foo value when it's given:

    perf command -c foo -d
    perf command -d -c foo

That's also why PARSE_OPT_LASTARG_DEFAULT is extended here to
support default values regardless of the option's position, not
only when it is the last argument.

Should it now be renamed to PARSE_OPT_ARG_DEFAULT?
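
For illustration, a minimal sketch of what such a helper can look like
on top of the existing option table; the struct-field names below are
assumptions borrowed from the git-style parse-options code and may not
match the actual patch:

 /* sketch -- field names are assumptions */
 #define OPT_CALLBACK_DEFAULT(s, l, v, a, h, f, d)                  \
        { .type = OPTION_CALLBACK, .short_name = (s),               \
          .long_name = (l), .value = (v), .argh = (a),              \
          .help = (h), .callback = (f), .defval = (intptr_t)(d),    \
          .flags = PARSE_OPT_LASTARG_DEFAULT }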

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: git@vger.kernel.org
LKML-Reference: <1246550301-8954-2-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Create new chain_for_each_child() iterator
Frederic Weisbecker [Thu, 2 Jul 2009 15:58:19 +0000 (17:58 +0200)]
perf_counter tools: Create new chain_for_each_child() iterator

Iterating through the children of a node in the callchain tree
shows something that may be quite confusing at first glance: the
list head is the children field of the parent, while the list
nodes live in the brothers field of the children.

This is because the children are linked to the parent as a list
of "brothers", using the parent's "children" list as the
head:

  ---------------
 | Parent (head) |-------------------------------------
  ---------------                                      |
     |                                                 |
  children                                             |
     |                                                 |
  -----------               -----------                |
 | 1st child |---brother---| 2nd child |---brother-----
  -----------               -----------

This makes the following strange pattern occur frequently:

 list_for_each_entry(child, &parent->children, brothers) {
        // do something with children
 }

Abstract it behind chain_for_each_child() to factor out and simplify
this pattern.
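
A sketch of how such an iterator can be expressed as a thin wrapper
over the list primitives described above (field names taken from the
description; treat the exact definition as an assumption):

 #define chain_for_each_child(child, parent)                        \
        list_for_each_entry(child, &(parent)->children, brothers)

 chain_for_each_child(child, parent) {
        /* do something with each child */
 }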

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246550301-8954-1-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Enable kernel module symbol loading in tools
Mike Galbraith [Thu, 2 Jul 2009 06:09:46 +0000 (08:09 +0200)]
perf_counter tools: Enable kernel module symbol loading in tools

Add the -m/--modules option to perf report and perf annotate,
which enables live module symbol/image loading. To be used
with -k/--vmlinux.

(Also give perf annotate a -P/--full-paths option.)

Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246514986.13293.48.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Connect module support infrastructure to symbol loading infrastru...
Mike Galbraith [Thu, 2 Jul 2009 06:08:36 +0000 (08:08 +0200)]
perf_counter tools: Connect module support infrastructure to symbol loading infrastructure

Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246514916.13293.46.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Add infrastructure to support loading of kernel module symbols
Mike Galbraith [Thu, 2 Jul 2009 06:07:10 +0000 (08:07 +0200)]
perf_counter tools: Add infrastructure to support loading of kernel module symbols

Add infrastructure for module path discovery and section load addresses.
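
To illustrate the kind of discovery this involves (this is not the
patch itself), the load address of a module's .text section can be
read from sysfs, while module paths are found by walking
/lib/modules/$(uname -r); a hedged userspace sketch:

 /* illustrative sketch only, not the actual infrastructure */
 #include <stdio.h>

 static unsigned long long module_text_start(const char *mod)
 {
        char path[256];
        unsigned long long addr = 0;
        FILE *fp;

        snprintf(path, sizeof(path), "/sys/module/%s/sections/.text", mod);
        fp = fopen(path, "r");          /* usually requires root */
        if (!fp)
                return 0;
        if (fscanf(fp, "%llx", &addr) != 1)
                addr = 0;
        fclose(fp);
        return addr;
 }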

Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246514830.13293.44.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Make symbol loading consistently return number of loaded symbols
Mike Galbraith [Thu, 2 Jul 2009 06:05:58 +0000 (08:05 +0200)]
perf_counter tools: Make symbol loading consistently return number of loaded symbols

Signed-off-by: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246514758.13293.42.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf stat: Handle pipe read failures in perf stat
Frederic Weisbecker [Wed, 1 Jul 2009 19:02:10 +0000 (21:02 +0200)]
perf stat: Handle pipe read failures in perf stat

Building builtin-stat.c reports the following errors:

cc1: warnings being treated as errors
builtin-stat.c: In function ‘run_perf_stat’:
builtin-stat.c:242: erreur: ignoring return value of ‘read’, declared with attribute warn_unused_result
builtin-stat.c:255: erreur: ignoring return value of ‘read’, declared with attribute warn_unused_result
make: *** [builtin-stat.o] Erreur 1

This patch handles the possible pipe read failures.
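
The shape of such a fix is simply to check read()'s return value
instead of discarding it; a hedged sketch (the fd name and error
handling below are assumptions, not the actual diff):

 #include <stdio.h>
 #include <stdlib.h>
 #include <unistd.h>

 /* sketch only -- fd name and error handling are assumptions */
 static void wait_for_child_ready(int ready_fd)
 {
        char buf;

        if (read(ready_fd, &buf, 1) == -1) {
                perror("failed to read pipe");
                exit(EXIT_FAILURE);
        }
 }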

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246474930-6088-2-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter: Ignore the nmi call frames in the x86-64 backtraces
Frederic Weisbecker [Wed, 1 Jul 2009 19:02:09 +0000 (21:02 +0200)]
perf_counter: Ignore the nmi call frames in the x86-64 backtraces

Almost every callchain recorded with perf record includes the
internal perf counter NMI frames:

 perf_callchain
 perf_counter_overflow
 intel_pmu_handle_irq
 perf_counter_nmi_handler
 notifier_call_chain
 atomic_notifier_call_chain
 notify_die
 do_nmi
 nmi

We want to ignore these frames as they are not interesting for
instrumentation. To solve this, we simply ignore every frame
coming from NMI context.
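
Conceptually this means teaching the x86-64 stack-walker callbacks to
refuse frames that live on the NMI stack. A very rough sketch of the
idea (the callback contract and the stack-name string are assumptions):

 #include <string.h>

 /* sketch of the idea only -- names and return convention are assumptions */
 static int backtrace_stack(void *data, char *name)
 {
        /* a negative return asks the walker to skip this stack */
        if (name && !strcmp(name, "NMI"))
                return -1;
        return 0;
 }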

New example of "perf report -s sym -c" after this patch:

9.59%  [k] search_by_key
             4.88%
                search_by_key
                reiserfs_read_locked_inode
                reiserfs_iget
                reiserfs_lookup
                do_lookup
                __link_path_walk
                path_walk
                do_path_lookup
                user_path_at
                vfs_fstatat
                vfs_lstat
                sys_newlstat
                system_call_fastpath
                __lxstat
                0x406fb1

             3.19%
                search_by_key
                search_by_entry_key
                reiserfs_find_entry
                reiserfs_lookup
                do_lookup
                __link_path_walk
                path_walk
                do_path_lookup
                user_path_at
                vfs_fstatat
                vfs_lstat
                sys_newlstat
                system_call_fastpath
                __lxstat
                0x406fb1
[...]

For now this patch only solves the problem in x86-64.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246474930-6088-1-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Share list.h with the kernel
Arnaldo Carvalho de Melo [Wed, 1 Jul 2009 17:46:08 +0000 (14:46 -0300)]
perf_counter tools: Share list.h with the kernel

The copy we were using came from another copy I did for the dwarves
(pahole) package, which in turn came from the kernel years ago.

The only function used by the perf tools that isn't in the kernel is
list_del_range, which I'm leaving in the perf tools for now.
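
For reference, list_del_range() unlinks a whole [begin, end] run from
its list in one step; a minimal sketch consistent with the usual
list.h primitives (treat it as an illustration, not the exact code):

 /* sketch: unlink the closed range [begin, end] from the list it is on */
 static inline void list_del_range(struct list_head *begin,
                                   struct list_head *end)
 {
        begin->prev->next = end->next;
        end->next->prev = begin->prev;
 }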

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
LKML-Reference: <20090701174608.GA5823@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Share rbtree.c with the kernel
Arnaldo Carvalho de Melo [Wed, 1 Jul 2009 15:28:37 +0000 (12:28 -0300)]
perf_counter tools: Share rbtree.c with the kernel

The tools/perf/util/rbtree.c copy already drifted by three
csets:

 4b324126e0c6c3a5080ca3ec0981e8766ed6f1ee
 4c60117811171d867d4f27f17ea07d7419d45dae
 16c047add3ceaf0ab882e3e094d1ec904d02312d

So remove the copy and use lib/rbtree.c directly, sharing the
source code while still generating a separate object file, since
tools/perf uses a far more aggressive -O6 switch.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <20090701152837.GG15682@ghostprotocols.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf list: Add cache events
Jaswinder Singh Rajput [Wed, 1 Jul 2009 13:06:18 +0000 (18:36 +0530)]
perf list: Add cache events

After:

$ ./perf list

List of pre-defined events (to be used in -e):

  cpu-cycles OR cycles                     [Hardware event]
  instructions                             [Hardware event]
  cache-references                         [Hardware event]
  cache-misses                             [Hardware event]
  branch-instructions OR branches          [Hardware event]
  branch-misses                            [Hardware event]
  bus-cycles                               [Hardware event]

  cpu-clock                                [Software event]
  task-clock                               [Software event]
  page-faults OR faults                    [Software event]
  minor-faults                             [Software event]
  major-faults                             [Software event]
  context-switches OR cs                   [Software event]
  cpu-migrations OR migrations             [Software event]

  L1-d$-loads                              [Hardware cache event]
  L1-d$-load-misses                        [Hardware cache event]
  L1-d$-stores                             [Hardware cache event]
  L1-d$-store-misses                       [Hardware cache event]
  L1-d$-prefetches                         [Hardware cache event]
  L1-d$-prefetch-misses                    [Hardware cache event]
  L1-i$-loads                              [Hardware cache event]
  L1-i$-load-misses                        [Hardware cache event]
  L1-i$-prefetches                         [Hardware cache event]
  L1-i$-prefetch-misses                    [Hardware cache event]
  LLC-loads                                [Hardware cache event]
  LLC-load-misses                          [Hardware cache event]
  LLC-stores                               [Hardware cache event]
  LLC-store-misses                         [Hardware cache event]
  LLC-prefetches                           [Hardware cache event]
  LLC-prefetch-misses                      [Hardware cache event]
  dTLB-loads                               [Hardware cache event]
  dTLB-load-misses                         [Hardware cache event]
  dTLB-stores                              [Hardware cache event]
  dTLB-store-misses                        [Hardware cache event]
  dTLB-prefetches                          [Hardware cache event]
  dTLB-prefetch-misses                     [Hardware cache event]
  iTLB-loads                               [Hardware cache event]
  iTLB-load-misses                         [Hardware cache event]
  branch-loads                             [Hardware cache event]
  branch-load-misses                       [Hardware cache event]

  rNNN                                     [raw hardware event descriptor]

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <1246453578.3072.1.camel@ht.satnam>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf stat: Define MATCH_EVENT for easy attr checking
Jaswinder Singh Rajput [Wed, 1 Jul 2009 09:35:09 +0000 (15:05 +0530)]
perf stat: Define MATCH_EVENT for easy attr checking

MATCH_EVENT is useful because it:

 1. simplifies checking multiple attrs
 2. avoids repeating the PERF_TYPE_ and PERF_COUNT_ prefixes, saving space
 3. avoids line breakage
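
A sketch of what such a helper can look like, assuming builtin-stat
keeps its counter attributes in an attrs[] array (the array and field
names are assumptions):

 /* sketch -- array and field names are assumptions */
 #define MATCH_EVENT(t, c, counter)                        \
        (attrs[counter].type == PERF_TYPE_##t &&           \
         attrs[counter].config == PERF_COUNT_##c)

 if (MATCH_EVENT(SOFTWARE, SW_TASK_CLOCK, counter))
        /* ... this counter is the task clock ... */;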

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
LKML-Reference: <1246440909.3403.5.camel@hpdv5.satnam>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Add more warnings and fix/annotate them
Ingo Molnar [Wed, 1 Jul 2009 10:37:06 +0000 (12:37 +0200)]
perf_counter tools: Add more warnings and fix/annotate them

Enable -Wextra. This found a few real bugs plus a number of
signed/unsigned type mismatches and other uncleanliness. It also
required a few annotations.

All things considered it was still worth it, so let's try with
this enabled for now.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf report: Fix HV bit mismerge
Ingo Molnar [Wed, 1 Jul 2009 09:17:20 +0000 (11:17 +0200)]
perf report: Fix HV bit mismerge

Fix:

 builtin-report.c: In function ‘hist_entry__add’:
 builtin-report.c:1015: error: case label not within a switch statement
 builtin-report.c:1017: error: break statement not within loop or switch

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Rework event string parsing/syntax
Paul Mackerras [Wed, 1 Jul 2009 03:04:34 +0000 (13:04 +1000)]
perf_counter tools: Rework event string parsing/syntax

This reworks the parser for event descriptors to make it more
consistent in what it accepts.  It is now structured as a
recursive descent parser for the following grammar:

events ::= event ( ("," | space) space* event )*
event ::= ( raw_event | numeric_event | symbolic_event |
      generic_hw_event ) [ event_modifier ]
raw_event ::= "r" hex_number
numeric_event ::= number ":" number
number ::= decimal_number | "0x" hex_number | "0" octal_number
symbolic_event ::= string_from_event_symbols_array
generic_hw_event ::= cache_type ( "-" ( cache_op | cache_result ) )*
event_modifier ::= ":" ( "u" | "k" | "h" )+

with the extra restriction that you can have at most one
cache_op and at most one cache_result.

We pass the current string pointer by reference (i.e. as a
const char **) to the various parsing functions so that they
can advance the pointer to indicate how much they consumed.
They return 0 if they didn't recognize the thing at the pointer
or 1 if they did (and advance the pointer past it).

This also fixes parse_aliases to take the longest matching
alias from the table, not the first one.  Otherwise "l1-data"
would match the "l1-d" alias and the "ata" would not be
consumed.

This allows event modifiers indicating what processor modes to
count in to be applied to any event, not just numeric events,
and adds a ":h" modifier to indicate counting in hypervisor
mode.  Specifying ":u" now sets both exclude_kernel and
exclude_hv, and so on.  Multiple modes can be specified, e.g.
":uk" will count in user or hypervisor mode (i.e. only
exclude_kernel will be set).
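
Some example event specifications this grammar accepts (illustrative
only; the raw event code and the numeric type:config pair below are
arbitrary):

    cycles:u                symbolic event, user mode only
    r1a8                    raw hardware event
    1:4:uk                  numeric type:config pair, user + hypervisor mode
    L1-d$-load-misses:k     generic hardware cache event, kernel mode only
    cache-misses,cycles     two events separated by a comma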

Signed-off-by: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
LKML-Reference: <19018.53826.843815.189847@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agopowerpc/perf_counter: Enable alternate PR/HV bits for POWER7
Anton Blanchard [Wed, 1 Jul 2009 03:07:01 +0000 (13:07 +1000)]
powerpc/perf_counter: Enable alternate PR/HV bits for POWER7

POWER7 has the same PR/HV bit layout as POWER6, so set the flag.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: a.p.zijlstra@chello.nl
Cc: benh@kernel.crashing.org
LKML-Reference: <20090701030701.GI3563@kryten>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Various fixes for callchains
Frederic Weisbecker [Wed, 1 Jul 2009 03:35:15 +0000 (05:35 +0200)]
perf_counter tools: Various fixes for callchains

The symbol resolving has of course revealed some bugs in the
callchain tree handling. This patch fixes some of them,
including:

- inherit the children from the parents while splitting a node
- fix list range moving
- fix index setting in callchains
- create a child on the current node if the path doesn't match in
  the existing children (was only done on the root)
- compare using symbols when possible, so that we can match a function
  from any ip inside it by referring to its start address (see the
  sketch after the lists below)

The practical effects are:

- remove double callchains
- fix upside down or any random order of callchains
- fix wrong paths
- fix bad hit counts and percentage accounting
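
The symbol-based comparison mentioned in the fix list above can be
pictured roughly like this (the struct and field names are
assumptions, not the actual code):

 #include <stdint.h>

 /* sketch -- types and fields are assumptions */
 struct symbol         { uint64_t start; };
 struct callchain_list { uint64_t ip; struct symbol *sym; };

 static int match_chain(struct callchain_list *cnode, uint64_t ip,
                        struct symbol *sym)
 {
        /* prefer the symbol's start address, so any ip inside the
         * function matches it ... */
        if (cnode->sym && sym)
                return cnode->sym->start == sym->start;
        /* ... and fall back to the raw instruction pointer */
        return cnode->ip == ip;
 }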

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246419315-9968-4-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Resolve symbols in callchains
Frederic Weisbecker [Wed, 1 Jul 2009 03:35:14 +0000 (05:35 +0200)]
perf_counter tools: Resolve symbols in callchains

This patch resolves, when possible, the name of each ip present
in the callchains when using the -c option with perf report.

Example:

5.40%  [k] __d_lookup
             5.37%
                perf_callchain
                perf_counter_overflow
                intel_pmu_handle_irq
                perf_counter_nmi_handler
                notifier_call_chain
                atomic_notifier_call_chain
                notify_die
                do_nmi
                nmi
                do_lookup
                __link_path_walk
                path_walk
                do_path_lookup
                user_path_at
                sys_faccessat
                sys_access
                system_call_fastpath
                0x7fb609846f77

             0.01%
                perf_callchain
                perf_counter_overflow
                intel_pmu_handle_irq
                perf_counter_nmi_handler
                notifier_call_chain
                atomic_notifier_call_chain
                notify_die
                do_nmi
                nmi
                do_lookup
                __link_path_walk
                path_walk
                do_path_lookup
                user_path_at
                sys_faccessat

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246419315-9968-3-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoperf_counter tools: Fix storage size allocation of callchain list
Frederic Weisbecker [Wed, 1 Jul 2009 03:35:13 +0000 (05:35 +0200)]
perf_counter tools: Fix storage size allocation of callchain list

Fix a confusion about the size passed when allocating a callchain
list: we were using the wrong structure's size.
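
The class of bug being fixed, illustrated with simplified,
hypothetical layouts (not the actual diff):

 #include <stdlib.h>
 #include <stdint.h>

 /* hypothetical, simplified layouts just to show the bug class */
 struct callchain_list { uint64_t ip; };
 struct callchain_node { struct callchain_list *chain; int nr; };

 static struct callchain_list *alloc_call(void)
 {
        /* bug: malloc(sizeof(struct callchain_node)) -- wrong structure */
        return malloc(sizeof(struct callchain_list));   /* the right size */
 }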

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
LKML-Reference: <1246419315-9968-2-git-send-email-fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoMerge branch 'linus' into perfcounters/urgent
Ingo Molnar [Wed, 1 Jul 2009 07:56:10 +0000 (09:56 +0200)]
Merge branch 'linus' into perfcounters/urgent

Merge reason: this branch was on a .30-ish base before, update
              it to an almost-.31-rc2 upstream base to pick up fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
15 years agoMerge branch 'kmemleak' of git://linux-arm.org/linux-2.6
Linus Torvalds [Wed, 1 Jul 2009 02:04:53 +0000 (19:04 -0700)]
Merge branch 'kmemleak' of git://linux-arm.org/linux-2.6

* 'kmemleak' of git://linux-arm.org/linux-2.6:
  kmemleak: Inform kmemleak about pid_hash
  kmemleak: Do not warn if an unknown object is freed
  kmemleak: Do not report new leaked objects if the scanning was stopped
  kmemleak: Slightly change the policy on newly allocated objects
  kmemleak: Do not trigger a scan when reading the debug/kmemleak file
  kmemleak: Simplify the reports logged by the scanning thread
  kmemleak: Enable task stacks scanning by default
  kmemleak: Allow the early log buffer to be configurable.

15 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm
Linus Torvalds [Wed, 1 Jul 2009 02:04:14 +0000 (19:04 -0700)]
Merge git://git./linux/kernel/git/agk/linux-2.6-dm

* git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm:
  dm table: fix blk_stack_limits arg to use bytes not sectors
  dm exception store: really fix type lookup

15 years agoMerge branch 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux...
Linus Torvalds [Wed, 1 Jul 2009 02:02:59 +0000 (19:02 -0700)]
Merge branch 'perfcounters-fixes-for-linus' of git://git./linux/kernel/git/tip/linux-2.6-tip

* 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (47 commits)
  perf report: Add --symbols parameter
  perf report: Add --comms parameter
  perf report: Add --dsos parameter
  perf_counter tools: Adjust only prelinked symbol's addresses
  perf_counter: Provide a way to enable counters on exec
  perf_counter tools: Reduce perf stat measurement overhead/skew
  perf stat: Use percentages for scaling output
  perf_counter, x86: Update x86_pmu after WARN()
  perf stat: Micro-optimize the code: memcpy is only required if no event is selected and !null_run
  perf stat: Improve output
  perf stat: Fix multi-run stats
  perf stat: Add -n/--null option to run without counters
  perf_counter tools: Remove dead code
  perf_counter: Complete counter swap
  perf report: Print sorted callchains per histogram entries
  perf_counter tools: Prepare a small callchain framework
  perf record: Fix unhandled io return value
  perf_counter tools: Add alias for 'l1d' and 'l1i'
  perf-report: Add bare minimum PERF_EVENT_READ parsing
  perf-report: Add modes for inherited stats and no-samples
  ...

15 years agoMerge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6
Linus Torvalds [Wed, 1 Jul 2009 02:01:52 +0000 (19:01 -0700)]
Merge branch 'release' of git://git./linux/kernel/git/aegl/linux-2.6

* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
  Add Fenghua Yu as temporary co-maintainer for ia64
  [IA64] address compiler warnings perfmon.c/salinfo.c
  [IA64] Remove unnecessary semicolons
  [IA64] sprintf should not be used with same source & destination address

15 years agoMN10300: Wire up new syscalls
David Howells [Tue, 30 Jun 2009 21:33:15 +0000 (22:33 +0100)]
MN10300: Wire up new syscalls

Wire up new syscalls rt_tgsigqueueinfo and perf_counter_open.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agoFRV: Wire up new syscalls
David Howells [Tue, 30 Jun 2009 21:24:54 +0000 (22:24 +0100)]
FRV: Wire up new syscalls

Wire up new syscalls rt_tgsigqueueinfo and perf_counter_open.

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agohostfs: set maximum filesize in superblock for proper LFS support
Wolfgang Illmeyer [Tue, 30 Jun 2009 18:41:44 +0000 (11:41 -0700)]
hostfs: set maximum filesize in superblock for proper LFS support

Maximum file size for hostfs mounts defaults to 2GB, so bigger files cannot be
read or written through hostfs. This patch initializes the maximum file size to
MAX_LFS_FILESIZE.

Addresses http://bugzilla.kernel.org/show_bug.cgi?id=13531

Signed-off-by: Wolfgang Illmeyer <wolfgang@illmeyer.com>
Cc: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agofloppy: fix lock imbalance
Jiri Slaby [Tue, 30 Jun 2009 18:41:44 +0000 (11:41 -0700)]
floppy: fix lock imbalance

A crappy macro prevents us from unlocking on a fail path.

Expand the macro and unlock appropriately.

Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agobfin: delay IRQ registration until driver is ready
Mike Frysinger [Tue, 30 Jun 2009 18:41:43 +0000 (11:41 -0700)]
bfin: delay IRQ registration until driver is ready

Make sure we do not actually request the RTC IRQ until the device driver
is fully ready to handle and process any interrupt.  This way a spurious
interrupt won't crash the system (which may happen if the bootloader was
poking the RTC right before booting Linux).

Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Alessandro Zummo <a.zummo@towertech.it>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agoatyfb: fix alignment for block writes
Ville Syrjala [Tue, 30 Jun 2009 18:41:42 +0000 (11:41 -0700)]
atyfb: fix alignment for block writes

Block writes require 64-byte alignment.  Since block writes could be used
with SGRAM or WRAM, also refine the memory type detection to check for
either type before deciding to use the 64-byte alignment.

Signed-off-by: Ville Syrjala <syrjala@sci.fi>
Tested-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agoatyfb: fix HP OmniBook 500 reboot hang
Ville Syrjala [Tue, 30 Jun 2009 18:41:40 +0000 (11:41 -0700)]
atyfb: fix HP OmniBook 500 reboot hang

Apparently the HP OmniBook 500's BIOS doesn't like the way atyfb reprograms
the hardware: the BIOS will simply hang after a reboot. Fix the problem
by restoring the hardware to its original state on reboot.

Signed-off-by: Ville Syrjala <syrjala@sci.fi>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agogpio: pl061: fix IRQ handling for GPIOs >= PL061_GPIO_NR
Baruch Siach [Tue, 30 Jun 2009 18:41:39 +0000 (11:41 -0700)]
gpio: pl061: fix IRQ handling for GPIOs >= PL061_GPIO_NR

IRQ handling is wrong for any GPIO >= PL061_GPIO_NR.

Fix this by implementing and using a proper .to_irq method.

Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Cc: David Brownell <david-b@pacbell.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agogpio: pl061: fix probe error handling code
Baruch Siach [Tue, 30 Jun 2009 18:41:38 +0000 (11:41 -0700)]
gpio: pl061: fix probe error handling code

Note that IRQ has not been initialized when kmalloc() fails.

Also, use DECLARE_BITMAP() to make the code clearer.

Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Cc: David Brownell <david-b@pacbell.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agox86: only clear node_states for 64bit
Yinghai Lu [Tue, 30 Jun 2009 18:41:37 +0000 (11:41 -0700)]
x86: only clear node_states for 64bit

Nathan reported that

| commit 73d60b7f747176dbdff826c4127d22e1fd3f9f74
| Author: Yinghai Lu <yinghai@kernel.org>
| Date:   Tue Jun 16 15:33:00 2009 -0700
|
|    page-allocator: clear N_HIGH_MEMORY map before we set it again
|
|    SRAT tables may contains nodes of very small size.  The arch code may
|    decide to not activate such a node.  However, currently the early boot
|    code sets N_HIGH_MEMORY for such nodes.  These nodes therefore seem to be
|    active although these nodes have no present pages.
|
|    For 64bit N_HIGH_MEMORY == N_NORMAL_MEMORY, so that works for 64 bit too

unintentionally and incorrectly clears the cpuset.mems cgroup attribute on
an i386 kvm guest, meaning that cpuset.mems can not be used.

Fix this by clearing node_states[N_NORMAL_MEMORY] for 64-bit only,
and do the needed save/restore for it in find_zone_movable_pfn.

Reported-by: Nathan Lynch <ntl@pobox.com>
Tested-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>,
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agocpusets: document adding/removing cpus to cpuset elaborately
Nikanth Karthikesan [Tue, 30 Jun 2009 18:41:36 +0000 (11:41 -0700)]
cpusets: document adding/removing cpus to cpuset elaborately

By writing a task's pid to the file, a process adds that task to that
cgroup/cpuset.  But to add a cpu/mem to a cpuset, the new list of cpus
should be written to the cpuset.cpus file (or of mems to cpuset.mems),
and it replaces the old list instead of adding to it.  Make this clearer
in the documentation.
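
For example (the mount point and cpuset name here are assumptions):

    # attach a task to the cpuset
    echo 1234 > /cgroup/cpuset/myset/tasks
    # replace the cpuset's CPU list -- this overwrites, it does not append
    echo 0-3 > /cgroup/cpuset/myset/cpuset.cpus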

Signed-off-by: Nikanth Karthikesan <knikanth@suse.de>
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Paul Menage <menage@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agomm: prevent balance_dirty_pages() from doing too much work
Richard Kennedy [Tue, 30 Jun 2009 18:41:35 +0000 (11:41 -0700)]
mm: prevent balance_dirty_pages() from doing too much work

balance_dirty_pages can overreact and move all of the dirty pages to
writeback unnecessarily.

balance_dirty_pages makes its decision to throttle based on the number of
dirty-plus-writeback pages that are over the calculated limit, so it will
continue to move pages even when there are plenty of pages in writeback
and fewer than the threshold still dirty.

This allows it to overshoot its limits and move all the dirty pages to
writeback while waiting for the drives to catch up and empty the writeback
list.

A simple fio test easily demonstrates this problem.

fio --name=f1 --directory=/disk1 --size=2G -rw=write --name=f2 --directory=/disk2 --size=1G --rw=write --startdelay=10

This is the simplest fix I could find, but I'm not entirely sure that it
alone will be enough for all cases.  But it certainly is an improvement on
my desktop machine writing to 2 disks.

Do we need something more for machines with large arrays where
bdi_threshold * number_of_drives is greater than the dirty_ratio?

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agobsdacct: fix access to invalid filp in acct_on()
Renaud Lottiaux [Tue, 30 Jun 2009 18:41:34 +0000 (11:41 -0700)]
bsdacct: fix access to invalid filp in acct_on()

The file opened in acct_on and freshly stored in the ns->bacct struct can
be closed in acct_file_reopen by a concurrent call after we release
acct_lock and before we call mntput(file->f_path.mnt).

Record file->f_path.mnt in a local variable and use this variable only.
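
The shape of the fix, sketched with heavily simplified kernel context
(locking and surrounding logic omitted; names follow the description
above, not the actual diff):

 /* sketch only -- record the vfsmount before acct_lock is dropped */
 struct vfsmount *mnt = file->f_path.mnt;

 spin_unlock(&acct_lock);
 /* a concurrent acct_file_reopen() may now close 'file' ... */
 mntput(mnt);    /* ... so use the saved pointer, not file->f_path.mnt */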

Signed-off-by: Renaud Lottiaux <renaud.lottiaux@kerlabs.com>
Signed-off-by: Louis Rilling <louis.rilling@kerlabs.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agoMAINTAINERS: STARFIRE/DURALAN update
Joe Perches [Tue, 30 Jun 2009 18:41:32 +0000 (11:41 -0700)]
MAINTAINERS: STARFIRE/DURALAN update

Ion's cs.columbia.edu email address no longer works.

Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Ion Badulescu <ionut@badula.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agokernel/resource.c: fix sign extension in reserve_setup()
Zhang Rui [Tue, 30 Jun 2009 18:41:31 +0000 (11:41 -0700)]
kernel/resource.c: fix sign extension in reserve_setup()

When the 32-bit signed quantities get assigned to the u64 resource_size_t,
they are incorrectly sign-extended.

Addresses http://bugzilla.kernel.org/show_bug.cgi?id=13253
Addresses http://bugzilla.kernel.org/show_bug.cgi?id=9905

Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Reported-by: Leann Ogasawara <leann@ubuntu.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Reported-by: <pablomme@googlemail.com>
Tested-by: <pablomme@googlemail.com>
Cc: <stable@kernel.org>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agospi: bitbang bugfix in message setup
David Brownell [Tue, 30 Jun 2009 18:41:30 +0000 (11:41 -0700)]
spi: bitbang bugfix in message setup

Bugfix to spi_bitbang infrastructure: make sure to always set transfer
parameters on the first pass through the message's per-transfer loop.
This can matter with drivers that replace the per-word or per-buffer
transfer primitives, on busses with multiple SPI devices.

Previously, this could have started messages using the settings left after
previous messages.  The problem was observed when a high speed chip
(m25p80 type flash) was running very slowly because a low speed device
(avr8 microcontroller) had previously used the bus.  Similar faults could
have driven the low speed device too fast, or used an unexpected word
size.

Acked-by: Steven A. Falco <sfalco@harris.com>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agofbdev: add mutex for fb_mmap locking
Krzysztof Helt [Tue, 30 Jun 2009 18:41:29 +0000 (11:41 -0700)]
fbdev: add mutex for fb_mmap locking

Add a mutex to avoid a circular locking problem between the mm layer
semaphore and fbdev ioctl mutex through the fb_mmap() call.

Also, add mutex to all places where smem_start and smem_len fields change
so the mutex inside the fb_mmap() is actually used.  Changing of these
fields before calling the framebuffer_register() are not mutexed.

This is 2.6.31 material.  It removes one lockdep warning (between fb_mmap()
and register_framebuffer()) but there is still another one (between
fb_release() and register_framebuffer()).  It also cleans up handling of the
smem_start and smem_len fields used by the mutexed section of fb_mmap().

Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agospi: add spi_master flag word
David Brownell [Tue, 30 Jun 2009 18:41:27 +0000 (11:41 -0700)]
spi: add spi_master flag word

Add a new spi_master.flags word listing constraints relevant to that
controller.  Define the first constraint bit: a half duplex restriction.
Include that constraint in the OMAP1 MicroWire controller driver.

Have the mmc_spi host be the first customer of this flag.  Its coding
relies heavily on full duplex transfers, so it must fail when the
underlying controller driver won't perform them.

(The spi_write_then_read routine could use it too: use the
temporarily-withdrawn full-duplex speedup unless this flag is set, in
which case the existing code applies.  Similarly, any spi_master
implementing only SPI_3WIRE should set the flag.)
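
A hedged sketch of how the constraint can be declared and then
honoured by a protocol driver such as mmc_spi (the flag name mirrors
the half-duplex restriction described above and should be treated as
an assumption):

 /* sketch -- flag name is an assumption */
 #define SPI_MASTER_HALF_DUPLEX  (1 << 0)   /* can't do full duplex */

 /* a driver that relies on full duplex refuses to bind: */
 if (spi->master->flags & SPI_MASTER_HALF_DUPLEX)
        return -EINVAL;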

Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agospi: new spi->mode bits
David Brownell [Tue, 30 Jun 2009 18:41:26 +0000 (11:41 -0700)]
spi: new spi->mode bits

Add two new spi_device.mode bits to accommodate more protocol options, and
pass them through to usermode drivers:

 * SPI_NO_CS ... a second 3-wire variant, where the chipselect
   line is removed instead of a data line; transfers are still
   full duplex.

   This obviously has STRONG protocol implications since the
   chipselect transitions can't be used to synchronize state
   transitions with the SPI master.

 * SPI_READY ... defines an open drain signal that's pulled low
   to pause the clock.  This defines a 5-wire variant (normal
   4-wire SPI plus READY) and two 4-wire variants (READY plus
   each of the 3-wire flavors).

   Such hardware flow control can be a big win.  There are ADC
   converters and flash chips that expose READY signals, but not
   many host controllers support it today.

The spi_bitbang code should be changed to use SPI_NO_CS instead of its
current nonportable hack.  That's a mode most hardware can easily support
(unlike SPI_READY).

Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Cc: "Paulraj, Sandeep" <s-paulraj@ti.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
15 years agodmapools: protect page_list walk in show_pools()
Thomas Gleixner [Tue, 30 Jun 2009 18:41:25 +0000 (11:41 -0700)]
dmapools: protect page_list walk in show_pools()

show_pools() walks the page_list of a pool w/o protection against the list
modifications in alloc/free.  Take pool->lock to avoid stomping into
nirvana.
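
The fix amounts to bracketing the walk with the pool's lock; a
simplified sketch (field names follow the description of mm/dmapool.c
but treat them as assumptions here):

 /* sketch -- hold pool->lock while walking pool->page_list */
 struct dma_page *page;
 unsigned int pages = 0, blocks = 0;

 spin_lock_irq(&pool->lock);
 list_for_each_entry(page, &pool->page_list, page_list) {
        pages++;
        blocks += page->in_use;
 }
 spin_unlock_irq(&pool->lock);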

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>