perf tools: Unset perf_event_attr::freq when period term is set
Jiri Olsa [Sat, 8 Aug 2015 17:12:10 +0000 (19:12 +0200)]
perf tools: Unset perf_event_attr::freq when period term is set

We need to unset the 'perf_event_attr::freq' bit (default 1) when a
'period' term is specified within an event definition like:

  -e 'cpu/cpu-cycles,call-graph=fp,time,period=100000/'

otherwise the period value will be handled as a frequency
(and fail if it crosses the maximum allowed frequency value).
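For illustration, a minimal sketch of the idea (assumed helper name, not
the exact tools/perf parse-events code):

  /*
   * Hedged sketch: when a "period" term is applied, store the value in
   * sample_period and clear the freq bit so the kernel does not treat
   * it as a sampling frequency.
   */
  #include <linux/perf_event.h>

  static void apply_period_term(struct perf_event_attr *attr, __u64 period)
  {
          attr->sample_period = period;
          attr->freq = 0;
  }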

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/20150808171210.GC17040@krava.brq.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Support full source file paths for srcline
Andi Kleen [Fri, 7 Aug 2015 22:24:05 +0000 (15:24 -0700)]
perf tools: Support full source file paths for srcline

For perf report/script srcline currently only the base file name of the
source file is printed. This is a good default because it usually fits
on the screen.

But in some cases we want to know the full file name, for example to
aggregate hits per file.

In the latter case we need more than the base file name to resolve file
naming collisions: for example, the kernel source has ~70 files named
"core.c".

It's also useful as input to post processing tools which want to point
to the right file.

Add a flag to allow full file name output.

Add an option to perf report/script to enable this option.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438986245-15191-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf callchain: Move option parsing code to util.c
Kan Liang [Thu, 6 Aug 2015 19:44:52 +0000 (15:44 -0400)]
perf callchain: Move option parsing code to util.c

Move the callchain option parsing code to util.c, to avoid dragging
more object files into the python binding.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438890294-33409-1-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf stat: Move perf_counts struct and functions into separate object
Jiri Olsa [Fri, 7 Aug 2015 10:51:03 +0000 (12:51 +0200)]
perf stat: Move perf_counts struct and functions into separate object

Move 'struct perf_counts' and the associated functions into a separate
object, so we can remove the stat.c object dependency from the python build.

This makes the python binding build properly; otherwise it fails to load
due to a missing stat-shadow.c object dependency once some patches from
Kan Liang are applied.

So apply this one first, then Kan's.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/20150807105103.GB8624@krava.brq.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Merge tag 'perf-ebpf-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git...
Ingo Molnar [Sat, 8 Aug 2015 08:05:17 +0000 (10:05 +0200)]
Merge tag 'perf-ebpf-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/ebpf library + llvm/clang infrastructure changes from Arnaldo Carvalho de Melo:

Infrastructure changes:

  - Add library for interfacing with the kernel eBPF infrastructure, with
    tools/perf/ targeted as a first user. (Wang Nan)

  - Add llvm/clang infrastructure for building BPF object files from C source
    code. (Wang Nan)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf tests: Add LLVM test for eBPF on-the-fly compiling
Wang Nan [Wed, 8 Jul 2015 10:04:02 +0000 (10:04 +0000)]
perf tests: Add LLVM test for eBPF on-the-fly compiling

Previous patches introduced llvm__compile_bpf() to compile a source file
into an eBPF object. This patch adds a testcase for it. It also tests
libbpf by opening the generated object, once the next patch introduces
the HAVE_LIBBPF_SUPPORT option.

Since llvm__compile_bpf() prints long messages that users who don't
explicitly test llvm don't care about, this patch sets verbose to -1 to
suppress all debug, warning and error messages, and hints that the user
should run 'perf test -v' to see the full output.

For the same reason, the test is skipped if clang is not found in PATH
and there's no [llvm] section in .perfconfig.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-17-git-send-email-wangnan0@huawei.com
[ Add tools/lib/bpf/ to tools/perf/MANIFEST, so that the tarball targets build ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Auto detecting kernel include options
Wang Nan [Thu, 11 Jun 2015 11:25:49 +0000 (11:25 +0000)]
perf tools: Auto detecting kernel include options

To help users find the correct kernel include options, this patch
extracts them from the kbuild system with an embedded script,
kinc_fetch_script, which creates a temporary directory, generates a
Makefile and an empty dummy.o, and then uses that Makefile to fetch the
$(NOSTDINC_FLAGS), $(LINUXINCLUDE) and $(EXTRA_CFLAGS) options. The
result is passed to the compiler script through the
'KERNEL_INC_OPTIONS' environment variable.

Because the options from kbuild contain relative paths like
'-Iinclude/generated/uapi', the working directory must be changed. This
is done by the previous patch.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1436445342-1402-16-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Auto detecting kernel build directory
Wang Nan [Tue, 14 Jul 2015 06:40:02 +0000 (06:40 +0000)]
perf tools: Auto detecting kernel build directory

This patch detects the kernel build directory by checking for the
existence of include/generated/autoconf.h.

The clang working directory is changed to the kbuild directory if it is
found, so that relative include paths can be used. The following patch
will detect the kernel include directory, which contains relative
include paths, so this workdir change is needed.

Users are allowed to set 'kbuild-dir = ""' manually to disable this
check.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/tip-owyfwfbemrjn0tlj6tgk2nf5@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Call clang to compile C source to object code
Wang Nan [Thu, 11 Jun 2015 10:31:09 +0000 (10:31 +0000)]
perf tools: Call clang to compile C source to object code

This is the core patch for supporting eBPF on-the-fly compiling. It does
the following work:

 1. Search for the clang compiler using search_program().

 2. Run the command template defined in the clang-bpf-cmd-template
    option of the [llvm] config section using read_from_pipe(). The
    path of clang and the source code path are injected into the shell
    command through environment variables using force_set_env() (see
    the sketch after this list).
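For illustration, a minimal sketch of that mechanism, with assumed
variable names (not the exact llvm-utils.c code):

  /*
   * Hedged sketch: export the parameters, then let a shell expand them
   * inside the configured command template and read the resulting
   * object code from the pipe.
   */
  #include <stdio.h>
  #include <stdlib.h>

  static FILE *run_bpf_cmd_template(const char *template,
                                    const char *clang_path,
                                    const char *source_path)
  {
          setenv("CLANG_EXEC", clang_path, 1);
          setenv("CLANG_SOURCE", source_path, 1);
          return popen(template, "r");    /* object code on stdout */
  }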

  Committer notice:

  When building with DEBUG=1 we get a compiler error that gets fixed with
  the same approach described in commit b236512280fb:

    perf kmem: Fix compiler warning about may be accessing uninitialized variable

    The last argument to strtok_r doesn't need to be initialized, its
    just a placeholder to make this routine reentrant, but gcc doesn't know
    about that and complains, breaking the build, fix it by setting it to
    NULL.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-14-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Introduce llvm config options
Wang Nan [Tue, 21 Jul 2015 11:13:34 +0000 (11:13 +0000)]
perf tools: Introduce llvm config options

This patch introduces an [llvm] config section with 5 options. Following
patches will use them to configure llvm dynamic compiling.

'llvm-utils.[ch]' is introduced in this patch to hold all the llvm/clang
related code.

Example:

  [llvm]
        # Path to clang. If omitted, search for it in $PATH.
        clang-path = "/path/to/clang"

        # Cmdline template. The following line shows its default value.
        # Environment variables are used to pass options.
        #
        # *NOTE*: -D__KERNEL__ MUST appear before $CLANG_OPTIONS,
        # so the user has a chance to use -U__KERNEL__ in $CLANG_OPTIONS
        # to cancel it.
        clang-bpf-cmd-template = "$CLANG_EXEC -D__KERNEL__ $CLANG_OPTIONS \
          $KERNEL_INC_OPTIONS -Wno-unused-value \
          -Wno-pointer-sign -working-directory \
          $WORKING_DIR  -c $CLANG_SOURCE -target \
          bpf -O2 -o -"

        # Options passed to clang, passed to the cmdline via
        # $CLANG_OPTIONS.
        clang-opt = "-Wno-unused-value -Wno-pointer-sign"

        # kbuild directory. If not set, use /lib/modules/`uname -r`/build.
        # If deliberately set to "", skip the kernel header auto-detection.
        kbuild-dir = "/path/to/kernel/build"

        # Options passed to 'make' when detecting kernel header options.
        kbuild-opts = "ARCH=x86_64"

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1437477214-149684-1-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Link all bpf objects onto a list
Wang Nan [Wed, 1 Jul 2015 02:14:10 +0000 (02:14 +0000)]
bpf tools: Link all bpf objects onto a list

To allow enumeration of all bpf_objects, keep them in a list (hidden from
the caller). bpf_object__for_each_safe() is introduced to do this
iteration. It is safe even if the user closes an object during iteration.
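For illustration, a minimal usage sketch (macro arguments assumed to
follow the usual "_safe" convention; header name assumed):

  #include "libbpf.h"     /* assumed header name */

  static void close_all_objects(void)
  {
          struct bpf_object *pos, *tmp;

          /* Closing the current object while iterating is fine because
           * the temporary already points at the next element. */
          bpf_object__for_each_safe(pos, tmp)
                  bpf_object__close(pos);
  }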

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-23-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Introduce accessors for struct bpf_program
Wang Nan [Wed, 1 Jul 2015 02:14:08 +0000 (02:14 +0000)]
bpf tools: Introduce accessors for struct bpf_program

This patch introduces accessors that let users of libbpf retrieve the
section name and fd of an opened/loaded eBPF program. 'struct
bpf_prog_handler' is used for that purpose. Accessors for a program's
section name and file descriptor are provided. Setting/getting private
data is also implemented.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Link: http://lkml.kernel.org/r/1435716878-189507-21-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Load eBPF programs in object files into kernel
Wang Nan [Wed, 1 Jul 2015 02:14:07 +0000 (02:14 +0000)]
bpf tools: Load eBPF programs in object files into kernel

This patch utilizes the previously introduced bpf_load_program() to load
the programs in the ELF file into the kernel. The result is stored in the
'fd' field of 'struct bpf_program'.

During loading, it allocates a log buffer and frees it before returning.
Note that the buffer is not passed to bpf_load_program() if the first
loading attempt is successful. A statically allocated log buffer is not
used, to avoid potential multi-thread problems.

Instructions collected during opening are cleared after loading.

load_program() is created to load a 'struct bpf_insn' array into the
kernel, and bpf_program__load() calls it. By this design we have one
function that loads instructions into the kernel. It will be used by
further patches, which create different instances from a program and
load them into the kernel.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-20-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Introduce bpf_load_program() to bpf.c
Wang Nan [Wed, 1 Jul 2015 02:14:06 +0000 (02:14 +0000)]
bpf tools: Introduce bpf_load_program() to bpf.c

bpf_load_program() can be used to load a bpf program into the kernel. To
make loading faster, first try to load without a logbuf. Try again with
the logbuf only if the first try fails.
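For illustration, a minimal sketch of that retry logic around the bpf(2)
syscall (assumed helper name, not the exact bpf.c code):

  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/bpf.h>

  static int try_prog_load(union bpf_attr *attr, char *log, unsigned int log_sz)
  {
          attr->log_buf = (__u64)(unsigned long)log;
          attr->log_size = log_sz;
          attr->log_level = log ? 1 : 0;
          return syscall(__NR_bpf, BPF_PROG_LOAD, attr, sizeof(*attr));
  }

  /* usage: fd = try_prog_load(&attr, NULL, 0);          fast path, no log
   *        if (fd < 0)
   *                fd = try_prog_load(&attr, log_buf, sizeof(log_buf));
   */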

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-19-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Relocate eBPF programs
Wang Nan [Wed, 1 Jul 2015 02:14:05 +0000 (02:14 +0000)]
bpf tools: Relocate eBPF programs

If an eBPF program accesses a map, LLVM generates a load instruction
which loads an absolute address into a register, like this:

  ld_64   r1, <MCOperand Expr:(mymap)>
  ...
  call    2

That ld_64 instruction will be recorded in the relocation section.
To enable the usage of that map, relocation must be done by replacing
the immediate value with the real map file descriptor so it can be found
by the eBPF map functions.

This patch does the relocation work based on information collected by the
patches:

'bpf tools: Collect symbol table from SHT_SYMTAB section',
'bpf tools: Collect relocation sections from SHT_REL sections'
and
'bpf tools: Record map accessing instructions for each program'.

For each instruction which needs relocation, it injects the corresponding
file descriptor into the imm field. As part of the protocol, src_reg is
set to BPF_PSEUDO_MAP_FD to notify the kernel that this is a map loading
instruction.
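For illustration, a minimal sketch of that patching step (assumed names,
not the exact libbpf code):

  #include <linux/bpf.h>

  /* Rewrite the recorded ld_64 instruction so the kernel resolves its
   * immediate as a map file descriptor. */
  static void relocate_map_insn(struct bpf_insn *insn, int map_fd)
  {
          insn->src_reg = BPF_PSEUDO_MAP_FD;
          insn->imm = map_fd;
  }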

This is the final part of the map relocation work. The principle of map
relocation is described in the commit message of 'bpf tools: Collect
symbol table from SHT_SYMTAB section'.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-18-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Create eBPF maps defined in an object file
Wang Nan [Wed, 1 Jul 2015 02:14:04 +0000 (02:14 +0000)]
bpf tools: Create eBPF maps defined in an object file

This patch creates maps based on the 'maps' section in the object file
using bpf_create_map(), and stores the fds into an array in 'struct
bpf_object'.

The previous patches parse the ELF object file and collect the required
data, but don't talk to the kernel. They belong to the 'opening' phase.
This patch is the first one in the 'loading' phase. The 'loaded' field
is introduced in 'struct bpf_object' to avoid loading an object twice,
because the loading phase clears the resources collected during opening,
which become useless after loading. In this patch, maps_buf is cleared.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-17-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Add bpf.c/h for common bpf operations
Wang Nan [Wed, 1 Jul 2015 02:14:03 +0000 (02:14 +0000)]
bpf tools: Add bpf.c/h for common bpf operations

This patch introduces bpf.c and bpf.h, which hold common functions for
issuing the bpf syscall. The goal of these two files is to hide the
syscall completely from the user. Note that bpf.c and bpf.h deal with
the kernel interface only. Things like the structure of the 'maps'
section in the ELF object are not the concern of bpf.[ch].

We first introduce bpf_create_map().

Note that, since the functions in bpf.[ch] are wrappers of sys_bpf, they
don't use OO style naming.
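For illustration, a minimal sketch of such a thin sys_bpf wrapper
(assumed name; the real bpf_create_map() prototype may differ):

  #include <unistd.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <linux/bpf.h>

  static int bpf_create_map_sketch(enum bpf_map_type type, int key_size,
                                   int value_size, int max_entries)
  {
          union bpf_attr attr;

          /* Fill the attribute and invoke the bpf(2) syscall directly. */
          memset(&attr, 0, sizeof(attr));
          attr.map_type = type;
          attr.key_size = key_size;
          attr.value_size = value_size;
          attr.max_entries = max_entries;

          return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
  }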

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-16-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Record map accessing instructions for each program
Wang Nan [Wed, 1 Jul 2015 02:14:02 +0000 (02:14 +0000)]
bpf tools: Record map accessing instructions for each program

This patch records the indices of the instructions which need to be
relocated. That information is saved in the 'reloc_desc' field in
'struct bpf_program'. In the loading phase (this patch takes effect in
the opening phase), the collected instructions will be replaced by map
loading instructions.

Since we are going to close the ELF file and clear all data at the end
of the 'opening' phase, the ELF information will no longer be valid in
the 'loading' phase. We have to locate the instructions before maps are
loaded, instead of directly modifying the instruction.

'struct bpf_map_def' is introduced in this patch to let us know how many
maps are defined in the object.
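For illustration, the conventional bpf_map_def layout implied here
(reconstructed from the usual samples/bpf-style convention, not quoted
from the patch):

  /* Each entry in the 'maps' section is expected to have this shape. */
  struct bpf_map_def {
          unsigned int type;
          unsigned int key_size;
          unsigned int value_size;
          unsigned int max_entries;
  };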

This is the third part of map relocation. The principle of map relocation
is described in the commit message of 'bpf tools: Collect symbol table
from SHT_SYMTAB section'.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-15-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Collect relocation sections from SHT_REL sections
Wang Nan [Wed, 1 Jul 2015 02:14:01 +0000 (02:14 +0000)]
bpf tools: Collect relocation sections from SHT_REL sections

This patch collects relocation sections into 'struct bpf_object'. Such
sections are used for connecting maps to bpf programs. The 'reloc' field
in 'struct bpf_object' is introduced for storing that information.

This patch simply stores the data into the 'reloc' field. A following
patch will parse them to find the exact instructions which need to be
relocated.

Note that the collected data will be invalid after the ELF object file is
closed.

This is the second patch related to map relocation. The first one is
'bpf tools: Collect symbol table from SHT_SYMTAB section'. The
principle of map relocation is described in its commit message.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-14-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Collect eBPF programs from their own sections
Wang Nan [Wed, 1 Jul 2015 02:14:00 +0000 (02:14 +0000)]
bpf tools: Collect eBPF programs from their own sections

This patch collects all programs in an object file into an array of
'struct bpf_program' for further processing. That structure represents
each eBPF program. 'bpf_prog' would be a better name, but it is already
used by linux/filter.h. Although that is a kernel space name, I still
prefer to call it 'bpf_program' to prevent possible confusion.

bpf_object__add_program() creates a new 'struct bpf_program' object.
It first initializes a variable on the stack using bpf_program__init(),
then, on success, enlarges the obj->programs array and copies the new
object in.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-13-git-send-email-wangnan0@huawei.com
[ Made bpf_object__add_program() propagate the error (-EINVAL or -ENOMEM) ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Collect symbol table from SHT_SYMTAB section
Wang Nan [Wed, 1 Jul 2015 02:13:59 +0000 (02:13 +0000)]
bpf tools: Collect symbol table from SHT_SYMTAB section

This patch collects the symbols section. This section is useful when
linking BPF maps.

What the 'bpf_map_xxx()' functions actually require are the maps' file
descriptors (the in-kernel verifier converts the fds into pointers to
'struct bpf_map'), which we don't know at compile time. Therefore, we
should make the compiler generate a 'ldr_64 r1, <imm>' instruction, and
fill the 'imm' field with the actual file descriptor when loading in
libbpf.

BPF programs should be written in this way:

 struct bpf_map_def SEC("maps") my_map = {
    .type = BPF_MAP_TYPE_HASH,
    .key_size = sizeof(unsigned long),
    .value_size = sizeof(unsigned long),
    .max_entries = 1000000,
 };

 SEC("my_func=sys_write")
 int my_func(void *ctx)
 {
     ...
     bpf_map_update_elem(&my_map, &key, &value, BPF_ANY);
     ...
 }

The compiler should convert '&my_map' into a 'ldr_64 r1, <imm>'
instruction, where imm should be the address of 'my_map'. Based on that
address, libbpf knows which map is actually referenced, and then fills
the imm field with the 'fd' of the map it created.

However, since we never really 'link' the object file, the imm field is
only a record in the relocation section. Therefore libbpf should do the
relocation:

 1. In the relocation section (type == SHT_REL), the position of each
    such 'ldr_64' instruction is recorded with a reference to an entry
    in the symbol table (SHT_SYMTAB);

 2. From the records in the symbol table we can find the indices of the
    map variables.

Libbpf first records SHT_SYMTAB and the positions of each instruction
which requires such an operation. Then it creates the file descriptors.
Finally, after map creation completes, it replaces the imm fields.

This is the first patch of the BPF map related stuff. It records
SHT_SYMTAB into the object's efile field for further use.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-12-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Collect map definitions from 'maps' section
Wang Nan [Wed, 1 Jul 2015 02:13:58 +0000 (02:13 +0000)]
bpf tools: Collect map definitions from 'maps' section

If maps are used by eBPF programs, the corresponding object file(s)
should contain a section named 'maps', which contains the map
definitions. This patch copies the data of the whole section. Map data
parsing is deferred until just before map loading.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-11-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Collect version and license from ELF sections
Wang Nan [Wed, 1 Jul 2015 02:13:57 +0000 (02:13 +0000)]
bpf tools: Collect version and license from ELF sections

Expand bpf_obj_elf_collect() to collect the license and kernel version
information from the eBPF object file. An eBPF object file should have a
section named 'license', which contains a string. It should also have a
section named 'version', which contains a u32 LINUX_VERSION_CODE.

bpf_obj_validate() is introduced to validate the object file after it is
loaded. Currently it only checks the existence of the 'version' section.
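For illustration, this is how such sections are conventionally emitted
from BPF C source (the usual samples/bpf-style convention, assumed here,
not quoted from this patch):

  #include <linux/version.h>

  #define SEC(name) __attribute__((section(name), used))

  char _license[] SEC("license") = "GPL";
  unsigned int _version SEC("version") = LINUX_VERSION_CODE;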

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-10-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Iterate over ELF sections to collect information
Wang Nan [Wed, 1 Jul 2015 02:13:56 +0000 (02:13 +0000)]
bpf tools: Iterate over ELF sections to collect information

bpf_obj_elf_collect() is introduced to iterate over the elf sections to
collect information from eBPF object files. This function will be further
enhanced to collect license, kernel version, program, config and map
information.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-9-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Check endianness and make libbpf fail early
Wang Nan [Wed, 1 Jul 2015 02:13:55 +0000 (02:13 +0000)]
bpf tools: Check endianness and make libbpf fail early

Check endianness according to the EHDR. The code is taken from
tools/perf/util/symbol-elf.c.

Libbpf doesn't magically convert mismatched endianness. Even if we
swapped the eBPF instructions to the correct byte order, we would be
unable to deal with endianness in the code logic generated by LLVM.

Therefore, libbpf should simply reject a mismatched ELF object, and let
LLVM create good code.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-8-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Read eBPF object from buffer
Wang Nan [Wed, 1 Jul 2015 02:13:54 +0000 (02:13 +0000)]
bpf tools: Read eBPF object from buffer

To support dynamic compiling, this patch allows the caller to pass an
in-memory buffer to libbpf via bpf_object__open_buffer(). libbpf calls
elf_memory() to open it as an ELF object file.

Because __bpf_object__open() collects all the required data and won't
need that buffer anymore, libbpf uses the buffer directly instead of
cloning a new one. The caller of libbpf can free the buffer or use it
for other things after bpf_object__open_buffer() returns.
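For illustration, a minimal caller-side sketch (the exact
bpf_object__open_buffer() prototype and header name are assumed from the
description above):

  #include <stdlib.h>
  #include "libbpf.h"     /* assumed header name */

  static struct bpf_object *open_from_memory(void *obj_buf, size_t obj_buf_sz)
  {
          struct bpf_object *obj = bpf_object__open_buffer(obj_buf, obj_buf_sz);

          /* Safe to release right away: libbpf has already collected
           * everything it needs from the buffer. */
          free(obj_buf);
          return obj;
  }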

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-7-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Open eBPF object file and do basic validation
Wang Nan [Wed, 1 Jul 2015 02:13:53 +0000 (02:13 +0000)]
bpf tools: Open eBPF object file and do basic validation

This patch defines the basic interface of libbpf. 'struct bpf_object'
will be the handle for each object file. Its internal structure is
hidden from the user. eBPF object files are compiled by LLVM into ELF
format. In this patch, libelf is used to open those files, read the EHDR
and do basic validation according to e_type and e_machine.

All elf-related state is grouped together and resides in the efile field
of 'struct bpf_object'. bpf_object__elf_finish() is introduced to clear
it.

After all eBPF programs in an object file are loaded, the related ELF
information is useless. Close the object file and free that memory.

The zfree() and zclose() functions are introduced to ensure pointers are
set to NULL and file descriptors to negative values after the resources
are released.
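For illustration, a minimal sketch of the libelf-based validation
described above (assumed checks, not the exact libbpf code):

  #include <gelf.h>

  static int check_bpf_object(int fd)
  {
          GElf_Ehdr ehdr;
          Elf *elf;
          int err = -1;

          if (elf_version(EV_CURRENT) == EV_NONE)
                  return -1;

          elf = elf_begin(fd, ELF_C_READ, NULL);
          if (!elf)
                  return -1;

          /* Accept only relocatable objects; the real code also looks at
           * e_machine, per the description above. */
          if (gelf_getehdr(elf, &ehdr) && ehdr.e_type == ET_REL)
                  err = 0;

          elf_end(elf);
          return err;
  }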

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-6-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Allow caller to set printing function
Wang Nan [Wed, 1 Jul 2015 02:13:52 +0000 (02:13 +0000)]
bpf tools: Allow caller to set printing function

With libbpf_set_print(), users of libbpf are allowed to register their
own debug, info and warning printing functions. Libbpf will use those
functions to print messages. If not provided, the default info and
warning printing functions are fprintf(stderr, ...); the default debug
printing is NULL.

This API is designed to be used by perf, enabling it to register its own
logging functions so that all logs are uniform, instead of having
separate logging level controls.
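For illustration, a minimal registration sketch (the three-hook call and
its argument order are assumed from the description above):

  #include <stdarg.h>
  #include <stdio.h>

  /* printf-style logger the caller hands to libbpf */
  static int my_print(const char *fmt, ...)
  {
          va_list args;
          int ret;

          va_start(args, fmt);
          ret = vfprintf(stderr, fmt, args);
          va_end(args);
          return ret;
  }

  /* usage (assumed order: warning, info, debug):
   *      libbpf_set_print(my_print, my_print, NULL);
   */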

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-5-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
bpf tools: Introduce 'bpf' library and add bpf feature check
Wang Nan [Wed, 1 Jul 2015 02:13:51 +0000 (02:13 +0000)]
bpf tools: Introduce 'bpf' library and add bpf feature check

This is the first patch of libbpf. The goal of libbpf is to create a
standard way of accessing eBPF object files. This patch creates a
'Makefile' and 'Build' for it, allowing 'make' to build libbpf.a and
libbpf.so, and 'make install' to put them into the proper directories.
Most of the Makefile is borrowed from traceevent.

Before building, the Makefile checks for the existence of libelf and
refuses to build if it is not found. Instead of throwing an error when
libelf is missing, the error is raised in a phony target "elfdep". This
design ensures that 'make clean' still works even if libelf is not
found.

Because libbpf requires the 'kern_version' field set in 'union bpf_attr'
(the "bpfdep" target is used for that dependency), the kernel BPF API is
also checked by introducing a new feature check 'bpf' into
tools/build/feature, which checks the existence and version of
linux/bpf.h. When building libbpf, it searches for that file in
include/uapi/linux of the kernel source tree it resides in (controlled
by FEATURE_CHECK_CFLAGS-bpf). Since it searches the kernel source tree
it resides in, installing the newest kernel headers is not required,
except when we are trying to port these files to an old kernel.

To avoid checking that file when building perf, the newly introduced
'bpf' feature check is not added to FEATURE_TESTS and FEATURE_DISPLAY by
default in tools/build/Makefile.feature, but only to libbpf's specific
lists.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Bcc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-4-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git...
Ingo Molnar [Fri, 7 Aug 2015 07:11:30 +0000 (09:11 +0200)]
Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

User visible changes:

  - IPC and cycle accounting in 'perf annotate'. (Andi Kleen)

  - Display cycles in branch sort mode in 'perf report'. (Andi Kleen)

  - Add total time column to 'perf trace' syscall stats summary. (Milian Wolff)

Infrastructure changes:

  - PMU helpers to use in Intel PT. (Adrian Hunter)

  - Fix perf-with-kcore script not to split args with spaces. (Adrian Hunter)

  - Add empty Build files for some more architectures. (Ben Hutchings)

  - Move 'perf stat' config variables to a struct to allow using some
    of its functions in more places. (Jiri Olsa)

  - Add DWARF register names for 'xtensa' arch. (Max Filippov)

  - Implement BPF programs attached to uprobes. (Wang Nan)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf tools: Extend the event parser maximum error index
Adrian Hunter [Fri, 17 Jul 2015 16:33:51 +0000 (19:33 +0300)]
perf tools: Extend the event parser maximum error index

Extend the event parser maximum error index from 10 to 13.  That allows
PMU config terms of up to 10 characters to display un-truncated in the
error message.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-17-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Validate config term maximum value
Adrian Hunter [Fri, 17 Jul 2015 16:33:50 +0000 (19:33 +0300)]
perf tools: Validate config term maximum value

Currently the value of a PMU config term is silently truncated if it is
too big. This is an impediment to validating the value against other
criteria later on, i.e. the user provides an invalid value that gets
truncated to a valid one.

The maximum value validation is only done for the parser where the error
is passed back to the user. In other cases the silent truncation
continues so as not to affect tools that perhaps rely on it.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-16-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Add perf_pmu__format_bits()
Adrian Hunter [Fri, 17 Jul 2015 16:33:49 +0000 (19:33 +0300)]
perf tools: Add perf_pmu__format_bits()

Add perf_pmu__format_bits() to get the format bits for a PMU config
term.  Intel PT will use this to validate terms and to record format
bits to enable later interpreting the config from the attribute stored
in the perf.data file.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-15-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Fix perf-with-kcore handling of arguments containing spaces
Adrian Hunter [Fri, 17 Jul 2015 16:33:47 +0000 (19:33 +0300)]
perf tools: Fix perf-with-kcore handling of arguments containing spaces

Fix the perf-with-kcore script so that it doesn't split arguments that
contain spaces.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-13-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf auxtrace: Fix period type 'i' not working
Adrian Hunter [Fri, 17 Jul 2015 16:33:46 +0000 (19:33 +0300)]
perf auxtrace: Fix period type 'i' not working

PERF_ITRACE_PERIOD_INSTRUCTIONS is zero so it got overwritten by the
default period type.

Fix by checking if the period type was set rather than if the value was
zero when applying the default.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Link: http://lkml.kernel.org/r/1437150840-31811-12-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools xtensa: Add DWARF register names
Max Filippov [Sat, 18 Jul 2015 08:30:11 +0000 (11:30 +0300)]
perf tools xtensa: Add DWARF register names

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Marc Gauthier <marc@cadence.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: linux-xtensa@linux-xtensa.org
Link: http://lkml.kernel.org/r/1437208216-15729-9-git-send-email-jcmvbkbc@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf report: Display cycles in branch sort mode
Andi Kleen [Sat, 18 Jul 2015 15:24:53 +0000 (08:24 -0700)]
perf report: Display cycles in branch sort mode

Display the cycles by default in branch sort mode.

To make enough room for the new column I removed dso_to. It is usually
redundant with dso_from.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-9-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf top: Add branch annotation code to top
Andi Kleen [Sat, 18 Jul 2015 15:24:52 +0000 (08:24 -0700)]
perf top: Add branch annotation code to top

Now that we can process branch data in annotate it makes sense to
support enabling branch recording from top too. Most of the code needed
for this is already in shared code with report. But we need to add:

- The option parsing code (using shared code from the previous patch)
- Document the options
- Set up the IPC/cycles accounting state in the top session
- Call the accounting code in the hist iter callback

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-8-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf annotate: Finally display IPC and cycle accounting
Andi Kleen [Sat, 18 Jul 2015 15:24:51 +0000 (08:24 -0700)]
perf annotate: Finally display IPC and cycle accounting

Add two new columns to the annotate display, showing the average cycles
and the computed IPC if available.

When the LBR was not in any branch mode, the IPC computation is
automatically disabled; we still display the cycle information.

Example output (with made up numbers):

The second column is the IPC and third average cycles.

                 │    __attribute__((noinline)) f2()
                 │    {
  5.15  0.07     │       push   %rbp
  0.01  0.07     │       mov    %rsp,%rbp
                 │            c = a / b;
  9.87  0.07     │       mov    a,%eax
        0.07     │       mov    b,%ecx
        0.07     │       cltd
  4.92  0.07  123│       idiv   %ecx
 70.79  0.07     │       mov    %eax,__TMC_END__
                 │    }
  9.25  0.07     │       pop    %rbp
  0.01  0.07  123│     ← retq

v2: Fix display problems.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-7-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf annotate: Compute IPC and basic block cycles
Andi Kleen [Sat, 18 Jul 2015 15:24:50 +0000 (08:24 -0700)]
perf annotate: Compute IPC and basic block cycles

Compute the IPC and the basic block cycles for the annotate display.

IPC is computed by counting the instructions in the basic block and
dividing that count by the average cycles accounted to the block.

The actual IPC computation can only be done at annotate time, because we
need to parse the objdump output first to know the number of
instructions in the basic block.

The cycles/IPC are also put into the perf function annotation so that
the display code can show them.

Again basic block overlaps are not handled, with the longest winning,
but there are some heuristics to hide the IPC when the longest is not
the most common.
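For illustration, a minimal sketch of that computation (illustrative
helper and values only, not the perf code):

  static double block_ipc(unsigned int n_insn, unsigned long cycles,
                          unsigned long num_hits)
  {
          /* Average the sampled cycles per traversal of the block, then
           * derive IPC from the instruction count parsed from objdump. */
          double avg_cycles = (double)cycles / num_hits;

          return n_insn / avg_cycles;   /* e.g. 6 insns, 738/123 cycles -> IPC 1.0 */
  }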

v2: Compute IPC correctly.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-6-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf report: Add processing for cycle histograms
Andi Kleen [Sat, 18 Jul 2015 15:24:49 +0000 (08:24 -0700)]
perf report: Add processing for cycle histograms

Call the earlier added cycle histogram infrastructure from the perf
report hist iter callback. For this we walk the branch records.

This makes it possible to use cycle histograms when browsing perf report
annotate.

v2: Rename flag

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-5-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf report: Add infrastructure for a cycles histogram
Andi Kleen [Sat, 18 Jul 2015 15:24:48 +0000 (08:24 -0700)]
perf report: Add infrastructure for a cycles histogram

This adds the basic infrastructure to keep track of cycle counts per
basic block for annotate. We allocate an array similar to the normal
accounting, and then account branch cycles there.

We handle two cases:

cycles per basic block with start, and cycles per branch (these are later
used for either IPC or just cycles per BB).

In the start case we cannot handle overlaps, so always the longest basic
block wins.

For the cycles per branch case everything is accurately accounted.

v2: Remove unnecessary checks. Slight restructure. Move
symbol__get_annotation to another patch. Move histogram allocation.
v3: Merged with current tree

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-4-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf report: Add flag for non ANY branch mode
Andi Kleen [Sat, 18 Jul 2015 15:24:47 +0000 (08:24 -0700)]
perf report: Add flag for non ANY branch mode

Later patches need to cheaply check that the branch mode is in ANY.  Add
a new function to check all event attrs and add a flag to the report
state, which is then initialized.

v2: Rename flag

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Add support for cycles, weight branch_info field
Andi Kleen [Sat, 18 Jul 2015 15:24:46 +0000 (08:24 -0700)]
perf tools: Add support for cycles, weight branch_info field

cycles is a new branch_info field available on some CPUs that indicates
the time deltas between branches in the LBR.

Add a sort key and output code for the cycles to allow displaying the
basic block cycles individually in perf report.

We also pass in the cycles for weight when LBRs are processed, which
allows getting the global and local weight, to get an estimate of the
total cost.

And also print the cycles information for perf report -D.  I also added
printing for the previously missing LBR flags (mispredict etc.)

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1437233094-12844-2-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Add empty Build files for architectures lacking them
Ben Hutchings [Tue, 4 Aug 2015 16:10:27 +0000 (17:10 +0100)]
perf tools: Add empty Build files for architectures lacking them

perf currently fails to build on MIPS as there is no
tools/perf/arch/mips/Build file.  Adding an empty file fixes this as
there are no MIPS-specific sources to build.

It looks like the same is needed for Alpha and PA-RISC, though I
haven't been able to test those.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Fixes: 5e8c0fb6a957 ("perf build: Add arch x86 objects building")
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1438704627.7315.2.camel@decadent.org.uk
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf stat: Move counter processing code into stat object
Jiri Olsa [Tue, 21 Jul 2015 12:31:27 +0000 (14:31 +0200)]
perf stat: Move counter processing code into stat object

Moving counter processing code into stat object as
perf_stat__process_counter.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-8-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf stat: Pass 'struct perf_stat_config' into process_counter()
Jiri Olsa [Tue, 21 Jul 2015 12:31:26 +0000 (14:31 +0200)]
perf stat: Pass 'struct perf_stat_config' into process_counter()

Passing 'struct perf_stat_config' into process_counter(), so that we can
make process_counter() non static and use it from other places.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-7-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf stat: Move 'interval' into struct perf_stat_config
Jiri Olsa [Tue, 21 Jul 2015 12:31:25 +0000 (14:31 +0200)]
perf stat: Move 'interval' into struct perf_stat_config

Moving 'interval' into struct perf_stat_config. The point is to
centralize the base stat config so it could be used locally together with
other stat routines in other parts of perf code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-6-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf stat: Move 'output' into struct perf_stat_config
Jiri Olsa [Tue, 21 Jul 2015 12:31:24 +0000 (14:31 +0200)]
perf stat: Move 'output' into struct perf_stat_config

Moving 'output' into struct perf_stat_config. The point is to centralize
the base stat config so it could be used locally together with other stat
routines in other parts of perf code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-5-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf stat: Move 'scale' into struct perf_stat_config
Jiri Olsa [Tue, 21 Jul 2015 12:31:23 +0000 (14:31 +0200)]
perf stat: Move 'scale' into struct perf_stat_config

Moving 'scale' into struct perf_stat_config. The point is to centralize
the base stat config so it could be used locally together with other stat
routines in other parts of perf code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-4-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf stat: Introduce struct perf_stat_config
Jiri Olsa [Tue, 21 Jul 2015 12:31:22 +0000 (14:31 +0200)]
perf stat: Introduce struct perf_stat_config

Moving 'aggr_mode' into new struct. The point is to centralize the base
stat config so it could be used locally together with other stat routines
in other parts of perf code.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1437481927-29538-3-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Add missing forward declaration of struct map to probe-event.h
Wang Nan [Fri, 19 Jun 2015 08:42:48 +0000 (08:42 +0000)]
perf tools: Add missing forward declaration of struct map to probe-event.h

Commit 7b6ff0bdbf4f7f429c2116cca92a6d171217449e ("perf probe ppc64le:
Fixup function entry if using kallsyms lookup") adds 'struct map' into
probe-event.h but does not forward declare it. This patch fixes that.

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Fixes: 7b6ff0bdbf4f ("perf probe ppc64le: Fixup function entry if using kallsyms lookup")
Link: http://lkml.kernel.org/n/1436445342-1402-30-git-send-email-wangnan0@huawei.com
[ No need to include map.h, just forward declare 'struct map' ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf tools: Introduce veprintf
Wang Nan [Fri, 31 Jul 2015 13:35:33 +0000 (10:35 -0300)]
perf tools: Introduce veprintf

Add veprintf(), a va_list counterpart to eprintf().
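
A minimal sketch of the idea (the exact signature in tools/perf/util/debug.c
may differ):

   #include <stdarg.h>
   #include <stdio.h>

   static int veprintf(int level, int verbose, const char *fmt, va_list args)
   {
           if (verbose < level)            /* same gating as eprintf() */
                   return 0;
           return vfprintf(stderr, fmt, args);
   }

   /* eprintf() can then become a thin wrapper: */
   static int eprintf(int level, int verbose, const char *fmt, ...)
   {
           va_list args;
           int ret;

           va_start(args, fmt);
           ret = veprintf(level, verbose, fmt, args);
           va_end(args);
           return ret;
   }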

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/n/1436445342-1402-19-git-send-email-wangnan0@huawei.com
[ split from another patch ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agotracing, perf: Implement BPF programs attached to uprobes
Wang Nan [Wed, 1 Jul 2015 02:13:50 +0000 (02:13 +0000)]
tracing, perf: Implement BPF programs attached to uprobes

By copying the BPF related operations to the uprobe processing path, this
patch allows users to attach BPF programs to uprobes, just as they can
already do with kprobes.

After this patch, users are allowed to use PERF_EVENT_IOC_SET_BPF on a
uprobe perf event, which makes it possible to profile user space programs
and kernel events together using BPF.

Because of this patch, CONFIG_BPF_EVENTS should be selected by
CONFIG_UPROBE_EVENT to ensure trace_call_bpf() is compiled even if
KPROBE_EVENT is not set.
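
A rough user space sketch of what this enables (error handling trimmed;
attach_bpf_to_uprobe() is a made-up helper name, but PERF_EVENT_IOC_SET_BPF
and PERF_EVENT_IOC_ENABLE are the real ioctls):

   #include <sys/ioctl.h>
   #include <linux/perf_event.h>

   /* 'uprobe_fd' comes from perf_event_open() on an event of the dynamic
    * "uprobe" PMU type, 'bpf_fd' from bpf(BPF_PROG_LOAD, ...): */
   static int attach_bpf_to_uprobe(int uprobe_fd, int bpf_fd)
   {
           if (ioctl(uprobe_fd, PERF_EVENT_IOC_SET_BPF, bpf_fd))
                   return -1;
           return ioctl(uprobe_fd, PERF_EVENT_IOC_ENABLE, 0);
   }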

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-3-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agobpf: Use correct #ifdef controller for trace_call_bpf()
Wang Nan [Wed, 1 Jul 2015 02:13:49 +0000 (02:13 +0000)]
bpf: Use correct #ifdef controller for trace_call_bpf()

Commit e1abf2cc8d5d80b41c4419368ec743ccadbb131e ("bpf: Fix the build on
BPF_SYSCALL=y && !CONFIG_TRACING kernels, make it more configurable")
updated the building condition of bpf_trace.o from CONFIG_BPF_SYSCALL
to CONFIG_BPF_EVENTS, but the corresponding #ifdef controller in
trace_events.h for trace_call_bpf() was not changed, which, in theory,
is incorrect.

With current Kconfigs, we can create a .config with CONFIG_BPF_SYSCALL=y
and CONFIG_BPF_EVENTS=n by unselecting CONFIG_KPROBE_EVENT and
selecting CONFIG_BPF_SYSCALL. With these options, trace_call_bpf() will
be declared as an extern function, but any caller will trigger a missing
symbol error at link time since bpf_trace.o is not built.

This patch changes the #ifdef controller for trace_call_bpf() from
CONFIG_BPF_SYSCALL to CONFIG_BPF_EVENTS. I'll show its correctness:

Before this patch:

   BPF_SYSCALL   BPF_EVENTS   trace_call_bpf   bpf_trace.o
   y             y           normal           compiled
   n             n           inline           not compiled
   y             n           normal           not compiled (incorrect)
   n             y          impossible (BPF_EVENTS depends on BPF_SYSCALL)

After this patch:

   BPF_SYSCALL   BPF_EVENTS   trace_call_bpf   bpf_trace.o
   y             y           normal           compiled
   n             n           inline           not compiled
   y             n           inline           not compiled (fixed)
   n             y          impossible (BPF_EVENTS depends on BPF_SYSCALL)

So this patch doesn't break anything. QED.
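
Roughly the shape of the guard after the change (prototype shown for
illustration; see include/linux/trace_events.h for the authoritative one):

   #ifdef CONFIG_BPF_EVENTS
   unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx);
   #else
   static inline unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
   {
           return 1;
   }
   #endif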

Signed-off-by: Wang Nan <wangnan0@huawei.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: He Kuang <hekuang@huawei.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Kaixu Xia <xiakaixu@huawei.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Zefan Li <lizefan@huawei.com>
Cc: pi3orama@163.com
Link: http://lkml.kernel.org/r/1435716878-189507-2-git-send-email-wangnan0@huawei.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf trace: Add total time column to summary.
Milian Wolff [Thu, 6 Aug 2015 09:24:29 +0000 (11:24 +0200)]
perf trace: Add total time column to summary.

It is cumbersome to manually calculate the total time spent in a given
syscall by multiplying the average value with the number of calls.

Instead, we now do this directly inside perf trace.

Note that this is also done by 'strace', which even adds a column with
relative numbers - something we could do in the future.

Example:

  perf trace -s find /some/folder > /dev/null

   Summary of events:

   find (19976), 700123 events, 100.0%, 0.000 msec

     syscall            calls    total       min       avg       max      stddev
                                 (msec)    (msec)    (msec)    (msec)        (%)
     --------------- -------- --------- --------- --------- ---------     ------
     read                   4     0.006     0.001     0.002     0.003     27.42%
     write               8046     9.617     0.001     0.001     0.035      0.56%
     open               34196    40.384     0.001     0.001     0.071      0.30%
     close              68375    57.104     0.001     0.001     0.076      0.25%
     stat                   4     0.004     0.001     0.001     0.001      3.14%
     fstat              34189    27.518     0.001     0.001     0.060      0.34%
     mmap                  13     0.029     0.001     0.002     0.003     10.74%
     mprotect               6     0.018     0.002     0.003     0.005     17.04%
     munmap                 3     0.014     0.003     0.005     0.006     24.87%
     brk                   87     0.490     0.001     0.006     0.016      6.50%
     ioctl                  3     0.004     0.001     0.001     0.003     36.39%
     access                 1     0.004     0.004     0.004     0.004      0.00%
     uname                  1     0.001     0.001     0.001     0.001      0.00%
     getdents           68393   143.600     0.001     0.002     0.187      0.95%
     fchdir             68371    56.980     0.001     0.001     0.111      0.39%
     arch_prctl             1     0.001     0.001     0.001     0.001      0.00%
     openat             34184    41.737     0.001     0.001     0.102      0.41%
     newfstatat         34184    41.180     0.001     0.001     0.064      0.34%

Signed-off-by: Milian Wolff <milian.wolff@kdab.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
LPU-Reference: 1438853069-5902-1-git-send-email-milian.wolff@kdab.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoMerge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git...
Ingo Molnar [Thu, 6 Aug 2015 06:51:18 +0000 (08:51 +0200)]
Merge tag 'perf-core-for-mingo' of git://git./linux/kernel/git/acme/linux into perf/core

Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

New features:

  - Deref sys_enter pointer args with contents from probe:vfs_getname, showing
    pathnames instead of pointers in many syscalls in 'perf trace'. (Arnaldo Carvalho de Melo)

  - Make 'perf trace' write to stderr by default, just like 'strace'. (Milian Wolff)

Infrastructure changes:

  - color_vfprintf() fixes. (Andi Kleen, Jiri Olsa)

  - Allow enabling/disabling PERF_SAMPLE_TIME per event. (Kan Liang)

  - Fix build errors with mipsel-linux-uclibc compiler. (Petri Gynther)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf tools: Fix build errors with mipsel-linux-uclibc compiler
Petri Gynther [Wed, 5 Aug 2015 00:38:01 +0000 (17:38 -0700)]
perf tools: Fix build errors with mipsel-linux-uclibc compiler

linux/tools$ make ARCH=mips CROSS_COMPILE=mipsel-linux- perf
...
config/Makefile:256: *** No gnu/libc-version.h found, please install
glibc-dev[el].  Stop.
make[1]: *** [all] Error 2
make: *** [perf] Error 2

...
In file included from builtin-sched.c:13:0:
util/cloexec.h:8:12: error: redundant redeclaration of ‘sched_getcpu’
 [-Werror=redundant-decls]
 extern int sched_getcpu(void) __THROW;

mipsel-buildroot-linux-uclibc/sysroot/usr/include/bits/sched.h:88:12:
 note: previous declaration of ‘sched_getcpu’ was here
 extern int sched_getcpu (void) __THROW;

uclibc info:
sysroot/usr/include/bits/uClibc_config.h
__UCLIBC_MAJOR__ 0
__UCLIBC_MINOR__ 9
__UCLIBC_SUBLEVEL__ 33

sysroot/usr/include/features.h
__UCLIBC__ 1
__GLIBC__ 2
__GLIBC_MINOR__ 2

Signed-off-by: Petri Gynther <pgynther@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1438735081-24131-1-git-send-email-pgynther@google.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf trace: Write to stderr by default
Milian Wolff [Wed, 5 Aug 2015 19:52:23 +0000 (16:52 -0300)]
perf trace: Write to stderr by default

Without this patch, it is cumbersome to read the trace output while
ignoring the normal, potentially verbose, output of the debuggee.  One
common example is doing something like the following:

 perf trace -s find /tmp > /dev/null

Without this patch, the trace summary would be lost. Now it will still
be printed at the end. This is also how strace behaves.

Cc: Milian Wolff <milian.wolff@kdab.com>
Cc: David Ahern <dsahern@gmail.com>
Link: http://lkml.kernel.org/n/tip-tqnks6y2cnvm5f9g2dsfr7zl@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf tools: Do not include escape sequences in color_vfprintf return
Andi Kleen [Tue, 4 Aug 2015 00:50:02 +0000 (17:50 -0700)]
perf tools: Do not include escape sequences in color_vfprintf return

color_vprintf was including the length of the invisible escape sequences
in its return value. Don't include them, to make the return value
usable for indentation calculations.
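
A sketch of the idea (illustrative helper, not the actual perf code): print
the escape sequences, but count only the visible payload in the return value:

   #include <stdarg.h>
   #include <stdio.h>

   static int color_fprintf_sketch(FILE *fp, const char *color,
                                   const char *fmt, ...)
   {
           va_list args;
           int r;

           fprintf(fp, "%s", color);      /* escapes: printed, not counted */
           va_start(args, fmt);
           r = vfprintf(fp, fmt, args);   /* visible text: counted */
           va_end(args);
           fprintf(fp, "%s", "\033[m");   /* reset: printed, not counted */
           return r;
   }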

v2: Add comment, rebase

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1438649408-20807-3-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf tools: Remove trail argument to color vsprintf
Jiri Olsa [Tue, 4 Aug 2015 00:50:01 +0000 (17:50 -0700)]
perf tools: Remove trail argument to color vsprintf

Seems like the trail argument is always '\n', passed via color_fprintf_ln(),
which is not used at all, so remove it. ;-)

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1438649408-20807-2-git-send-email-andi@firstfloor.org
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf tools: Refine parse/config callchain functions
Kan Liang [Tue, 4 Aug 2015 08:30:20 +0000 (04:30 -0400)]
perf tools: Refine parse/config callchain functions

Pass the global callchain_param into parse_callchain_record_opt() and
perf_evsel__config_callgraph() as a parameter, so we can reuse these
functions to parse/configure a local callchain param.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438677022-34296-3-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf tools: Per-event time support
Kan Liang [Tue, 4 Aug 2015 08:30:19 +0000 (04:30 -0400)]
perf tools: Per-event time support

This patchkit adds the ability to turn off time stamps per event.

One useful case for partial time is combining it with a per-event callgraph
to enable "PEBS threshold > 1" (https://lkml.org/lkml/2015/5/10/196), which
can significantly reduce the sampling overhead.

Samples from events with time stamps turned off will not be ordered.
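
In sketch form, the per-event term boils down to toggling PERF_SAMPLE_TIME on
that event's attr (apply_time_term() is a made-up name; the real handling
lives in the evsel config code):

   #include <stdbool.h>
   #include <linux/perf_event.h>

   static void apply_time_term(struct perf_event_attr *attr, bool time)
   {
           if (time)
                   attr->sample_type |= PERF_SAMPLE_TIME;
           else
                   attr->sample_type &= ~(__u64)PERF_SAMPLE_TIME;
   }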

Signed-off-by: Kan Liang <kan.liang@intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/r/1438677022-34296-2-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf trace: Use vfs_getname syscall arg beautifier in more syscalls
Arnaldo Carvalho de Melo [Wed, 5 Aug 2015 02:31:25 +0000 (23:31 -0300)]
perf trace: Use vfs_getname syscall arg beautifier in more syscalls

Those were covered and tested in this cset:

 access, chdir, chmod, chown, chroot, creat, getxattr,
 inotify_add_watch, lchown, lgetxattr, listxattr,
 lsetxattr, mkdir, mkdirat, mknod, rmdir, faccessat,
 newfstatat, openat, readlink, readlinkat, removexattr,
 setxattr, statfs, swapon, swapoff, truncate, unlinkat,
 utime, utimes, utimensat.

E.g.:

  # trace -e statfs,access,mkdir mkdir /tmp/bla
   0.285 (0.020 ms): mkdir/2799 access(filename: /etc/ld.so.preload, mode: R         ) = -1 ENOENT No such file or directory
   1.070 (0.032 ms): mkdir/2799 statfs(pathname: /sys/fs/selinux, buf: 0x7ffeafbdc930) = 0
   1.087 (0.013 ms): mkdir/2799 statfs(pathname: /sys/fs/selinux, buf: 0x7ffeafbdc820) = 0
   1.189 (0.014 ms): mkdir/2799 access(filename: /etc/selinux/config                 ) = 0
   1.905 (0.610 ms): mkdir/2799 mkdir(pathname: /tmp/bla, mode: 511                  ) = 0
  #

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-wbqtnlktquun3wtpjdz3okul@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf trace: Deref sys_enter pointer args with contents from probe:vfs_getname
Arnaldo Carvalho de Melo [Wed, 5 Aug 2015 01:30:09 +0000 (22:30 -0300)]
perf trace: Deref sys_enter pointer args with contents from probe:vfs_getname

To work like strace and dereference syscall pointer args we need to
insert probes (or tracepoints) right after we copy those bytes from
userspace.

Since we're formatting the syscall args at raw_syscalls:sys_enter time,
we need to have a formatter that just stores the position where, later,
when we get the probe:vfs_getname, we can insert the pointer contents.

Now, if a probe:vfs_getname with this format is in place:

 # perf probe -l
  probe:vfs_getname (on getname_flags:72@/home/git/linux/fs/namei.c with pathname)

That was, in this case, put in place with:

 # perf probe 'vfs_getname=getname_flags:72 pathname=filename:string'
 Added new event:
  probe:vfs_getname    (on getname_flags:72 with pathname=filename:string)

 You can now use it in all perf tools, such as:

perf record -e probe:vfs_getname -aR sleep 1
 #

Then 'perf trace' will notice that and do the pointer -> contents
expansion:

 # trace -e open touch /tmp/bla
  0.165 (0.010 ms): touch/17752 open(filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
  0.195 (0.011 ms): touch/17752 open(filename: /lib64/libc.so.6, flags: CLOEXEC) = 3
  0.512 (0.012 ms): touch/17752 open(filename: /usr/lib/locale/locale-archive, flags: CLOEXEC) = 3
  0.582 (0.012 ms): touch/17752 open(filename: /tmp/bla, flags: CREAT|NOCTTY|NONBLOCK|WRONLY, mode: 438) = 3
 #

Roughly equivalent to strace's output:

 # strace -rT -e open touch /tmp/bla
  0.000000 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 <0.000039>
  0.000317 open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 <0.000102>
  0.001461 open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3 <0.000072>
  0.000405 open("/tmp/bla", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = 3 <0.000055>
  0.000641 +++ exited with 0 +++
 #

Now we need to look at all syscall args that are marked as pointers and
have some well known names ("filename", "pathname", etc.) and set the arg
formatter to the one used for the "open" syscall in this patch.

This implementation works for syscalls with just one string being copied
from userspace. For syscalls with more than one string being copied via
the same probe/trace point (vfs_getname) we need to extend the
vfs_getname probe spec to include the pointer too, but there are some
problems with that in 'perf probe' or the kernel kprobes code that need
to be investigated before considering support for multiple strings per
syscall.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xvuwx6nuj8cf389kf9s2ue2s@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf trace: Use a constant for the syscall formatting buffer
Arnaldo Carvalho de Melo [Wed, 5 Aug 2015 01:17:29 +0000 (22:17 -0300)]
perf trace: Use a constant for the syscall formatting buffer

We were using it as a magic number, 1024, fix that.

Eventually we need to stop doing it per line, and do it per
arg, traversing the args at output time, to avoid the memmove()
calls that will be used in the next cset to replace pointers
present at raw_syscalls:sys_enter time with their contents that
appear at probe:vfs_getname time, before raw_syscalls:sys_exit
time.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-4sz3wid39egay1pp8qmbur4u@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf trace: Remember if the vfs_getname tracepoint/kprobe is in place
Arnaldo Carvalho de Melo [Tue, 4 Aug 2015 20:01:04 +0000 (17:01 -0300)]
perf trace: Remember if the vfs_getname tracepoint/kprobe is in place

So that we can later decide if we will store where to expand the
pathname once we are handling vfs_getname or if we should instead
just go on and straight away print the pointer.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ytxk5s5jpc50wahffmlxgxuw@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf trace: Do not show syscall tracepoint filter in the --no-syscalls case
Arnaldo Carvalho de Melo [Mon, 3 Aug 2015 20:12:29 +0000 (17:12 -0300)]
perf trace: Do not show syscall tracepoint filter in the --no-syscalls case

We were accessing trace->syscalls.events members even when that struct
wasn't initialized, i.e. when --no-syscalls was specified on the command
line. Fix it to show the filter, still only in debug mode, when we have
an event qualifier list, i.e. when we actually are doing subset syscall
tracing.

Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Milian Wolff <mail@milianw.de>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Fixes: 19867b6186f3 ("perf trace: Use event filters for the event qualifier list")
Link: http://lkml.kernel.org/n/tip-7980ym6vujgh3yiai0cqzc88@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf script: No tracepoints? Don't call libtraceevent.
Arnaldo Carvalho de Melo [Mon, 3 Aug 2015 19:27:40 +0000 (16:27 -0300)]
perf script: No tracepoints? Don't call libtraceevent.

The libtraceevent handler (session->tevent) is only initialized when
there are tracepoints in a perf.data event list, so do not call
pevent_set_function_resolve() when there are none, fixing a segfault.

Reported-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Jiri Olsa <jolsa@kernel.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xyynkucl5p4bcs13zi4i4b1f@git.kernel.org
Link: http://lkml.kernel.org/r/20150803174113.GA20282@krava.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
9 years agoperf/x86/intel/pebs: Robustify PEBS buffer drain
Peter Zijlstra [Wed, 15 Jul 2015 12:35:46 +0000 (14:35 +0200)]
perf/x86/intel/pebs: Robustify PEBS buffer drain

Vince Weaver and Stephane Eranian reported warnings in the PEBS
code when running the perf fuzzer. Stephane wrote:

  > I can reproduce the problem on my HSW running the fuzzer.
  >
  > I can see why this could be happening if you are mixing PEBS and non PEBS events
  > in the bottom 4 counters. I suspect:
  >         for (bit = 0; bit < x86_pmu.max_pebs_events; bit++) {
  >                 if ((counts[bit] == 0) && (error[bit] == 0))
  >                         continue;
  >
  > This test is not correct when you have non-PEBS events mixed with
  > PEBS events and they overflow at the same time. They will have
  > counts[i] != 0 but error[i] == 0, and thus you fall thru the loop
  > and hit the assert. Or it is something along those lines.

The only way I can make this work is if ->status only has !PEBS events
set, because if it has both set we'll take that slow path which masks
out the !PEBS bits.

After masking there are 3 options:

 - there is one bit set, and it's @bit, we increment counts[bit].

 - there are multiple bits set, we increment error[] for each set bit,
   we do not increment counts[].

 - there are no bits set, we do nothing.

The intent was to never increment counts[] for !PEBS events.

Now if we start out with only a single !PEBS event set, we'll pass the
test and increment counts[] for a !PEBS event and hit the warning.

Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Reported-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/pebs: Fix event disable PEBS buffer drain
Liang, Kan [Fri, 3 Jul 2015 20:08:27 +0000 (20:08 +0000)]
perf/x86/intel/pebs: Fix event disable PEBS buffer drain

When disabling a PEBS event, we need to drain the buffer. Doing so
requires a correct cpuc->pebs_active mask.

The current code clears the pebs_active bit before draining the
buffer. Fix that.

Signed-off-by: "Liang, Kan" <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver<vincent.weaver@maine.edu>
Link: http://lkml.kernel.org/r/37D7C6CF3E00A74B8858931C1DB2F07701885A65@SHSMSX103.ccr.corp.intel.com
[ Fixed the SOB. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86: Add an MSR PMU driver
Andy Lutomirski [Mon, 20 Jul 2015 15:49:06 +0000 (11:49 -0400)]
perf/x86: Add an MSR PMU driver

This patch adds an MSR PMU to support free running MSR counters, such as
the time and frequency related counters TSC, IA32_APERF, IA32_MPERF and
IA32_PPERF, and also SMI_COUNT.

The events are exposed in sysfs for use by perf stat and other tools.
The files are under /sys/devices/msr/events/.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Kan Liang <kan.liang@intel.com>
[ s/freq/msr/, added SMI_COUNT, fixed bugs. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: adrian.hunter@intel.com
Cc: dsahern@gmail.com
Cc: eranian@google.com
Cc: jolsa@kernel.org
Cc: mark.rutland@arm.com
Cc: namhyung@kernel.org
Link: http://lkml.kernel.org/r/1437407346-31186-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/uncore: Add Broadwell-DE uncore support
Kan Liang [Thu, 2 Jul 2015 12:12:52 +0000 (08:12 -0400)]
perf/x86/intel/uncore: Add Broadwell-DE uncore support

The uncore subsystem for Broadwell-DE is similar to Haswell-EP.  There
are some differences in pci device IDs, box number and constraints.

Please refer to the public document:

  http://www.intel.com/content/www/us/en/processors/xeon/xeon-d-1500-uncore-performance-monitoring.html

Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1435839172-15114-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel: Use 0x11 as extra reg test value
Andi Kleen [Tue, 30 Jun 2015 23:33:24 +0000 (16:33 -0700)]
perf/x86/intel: Use 0x11 as extra reg test value

The next patch adds a new perf extra register where 0x1ff is not a valid
value. Use 0x11 instead.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1435707205-6676-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86: Make merge_attr() global to use from perf_event_intel
Andi Kleen [Mon, 29 Jun 2015 21:22:13 +0000 (14:22 -0700)]
perf/x86: Make merge_attr() global to use from perf_event_intel

merge_attr() allows merging two sysfs attribute tables.
Export it so it can be used by other files too.

The next patch is going to use that to extend the sysfs format
attributes for a CPU.
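
A sketch of the operation (illustrative; the real merge_attr() in the x86
perf_event code differs in details, and needs <linux/slab.h> and
<linux/sysfs.h>):

   /* concatenate two NULL-terminated attribute tables into a new one */
   static struct attribute **merge_attr_sketch(struct attribute **a,
                                               struct attribute **b)
   {
           struct attribute **new;
           int i, j, k = 0;

           for (i = 0; a[i]; i++)
                   ;
           for (j = 0; b[j]; j++)
                   ;

           new = kmalloc_array(i + j + 1, sizeof(*new), GFP_KERNEL);
           if (!new)
                   return NULL;

           for (i = 0; a[i]; i++)
                   new[k++] = a[i];
           for (j = 0; b[j]; j++)
                   new[k++] = b[j];
           new[k] = NULL;

           return new;
   }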

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1435612935-24425-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/lbr: Limit LBR accesses to TOS in callstack mode
Andi Kleen [Thu, 28 May 2015 04:13:18 +0000 (21:13 -0700)]
perf/x86/intel/lbr: Limit LBR accesses to TOS in callstack mode

In callstack mode the LBR is not a ring buffer, but a stack that grows up
and down. This means in this case we don't need to access all LBRs, only the
ones up to TOS. Do this optimization for the normal LBR read, and the context
switch save/restore code. For save/restore it can be done unconditionally, as
it only runs when call stack mode is active.

This recovers some of the cost of going to 32 LBRs on Skylake.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Link: http://lkml.kernel.org/r/1432786398-23861-6-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/lbr: Use correct index to save/restore LBR_INFO with call stack
Andi Kleen [Thu, 28 May 2015 04:13:17 +0000 (21:13 -0700)]
perf/x86/intel/lbr: Use correct index to save/restore LBR_INFO with call stack

Use the correct index to save/restore the LBR_INFO_x MSR in
callstack mode. This is more of a cleanup, as even with the wrong
index the register was correctly saved/restored, and also
LBR callgraph mode in perf tools does not really need anything in
LBR_INFO. But it is still better to use the right index.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Link: http://lkml.kernel.org/r/1432786398-23861-5-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel: Add Intel Skylake PMU support
Andi Kleen [Sun, 10 May 2015 19:22:44 +0000 (12:22 -0700)]
perf/x86/intel: Add Intel Skylake PMU support

Add perf core PMU support for future Intel Skylake CPU cores.

The code is based on Haswell/Broadwell.

There is a new cache event list, based on the updated Haswell
event list.

Skylake has removed most counter constraints on basic
events, so the basic constraints table now only has a single
entry (plus the fixed counters).

TSX support and various other setups are all shared with Haswell.

Skylake has 32 LBR entries. Add a new LBR init function
to set this up. The filters are all the same as Haswell.

It also has a new LBR format with a separate LBR_INFO_* MSR,
but that has been already added earlier.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-7-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/lbr: Optimize v4 LBR unfreezing
Andi Kleen [Sun, 10 May 2015 19:22:46 +0000 (12:22 -0700)]
perf/x86/intel/lbr: Optimize v4 LBR unfreezing

In Arch perfmon v4 the GLOBAL_STATUS reset automatically unfreezes
LBRs. So no need to do it manually in the LBR code. Add a check
to skip it.

v2: Move test up to beginning of function.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-9-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel: Move PMU ACK to after LBR read
Andi Kleen [Sun, 10 May 2015 19:22:47 +0000 (12:22 -0700)]
perf/x86/intel: Move PMU ACK to after LBR read

With Arch Perfmon v4 the PMU ack unfreezes the LBRs. So we need to do
the PMU ack after the LBR reading, otherwise the LBRs would be polluted
by the PMI handler.

This is a minimal change. In principle the ACK could be moved much later.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-10-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel: Handle new arch perfmon v4 status bits
Andi Kleen [Sun, 10 May 2015 19:22:45 +0000 (12:22 -0700)]
perf/x86/intel: Handle new arch perfmon v4 status bits

ArchPerfmon v4 has some new status bits in GLOBAL_STATUS.

These need to be ignored when deciding whether an NMI
came from the PMU, to avoid eating all NMIs when they
stay set, see:

    b292d7a10487 ("perf/x86/intel: ignore CondChgd bit to avoid false NMI handling")

This patch ignores the new ASIF bit, which indicates
that SGX interfered with the PMU, and also the new
LBR freezing bits, which are set when the LBRs get
frozen, plus the existing CondChange (set by JTAG
debuggers and some buggy BIOSes).

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-8-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/lbr: Add support for LBRv5
Andi Kleen [Sun, 10 May 2015 19:22:43 +0000 (12:22 -0700)]
perf/x86/intel/lbr: Add support for LBRv5

Add support for the new LBRv5 format used on Intel Skylake CPUs.

The flags for mispredict, abort, in_tx etc. moved to a range of separate
LBR_INFO_* MSRs. Teach the LBR code to read those. The original
LBR registers stay the same, except they have full sign
extension now.

LBR_INFO also reports a cycle count to the last branch.
Report the cycle information using the new "cycles" branch_info
output field.

In addition we have to context switch and clear the new INFO
MSRs to avoid any information leaks.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-6-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf: Add cycles to branch_info
Andi Kleen [Sun, 10 May 2015 19:22:42 +0000 (12:22 -0700)]
perf: Add cycles to branch_info

Intel Skylake supports reporting the time in cycles a branch in the LBR
took, to give a rough indication of the basic block performance.

Export the cycle information in the branch_info structure.
This can be done by just reusing some currently zero padding.
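
Roughly the resulting layout (see include/uapi/linux/perf_event.h for the
authoritative definition and field comments):

   struct perf_branch_entry {
           __u64   from;
           __u64   to;
           __u64   mispred:1,    /* target mispredicted        */
                   predicted:1,  /* target predicted           */
                   in_tx:1,      /* in transaction             */
                   abort:1,      /* transaction abort          */
                   cycles:16,    /* cycle count to last branch */
                   reserved:44;  /* padding still unused       */
   };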

This is just the generic header change. The architecture
still needs to fill it in.

There's no attempt to convert to real time, as we really
want cycles here.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-5-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agox86: Add new MSRs and MSR bits used for Intel Skylake PMU support
Andi Kleen [Sun, 10 May 2015 19:22:41 +0000 (12:22 -0700)]
x86: Add new MSRs and MSR bits used for Intel Skylake PMU support

Add new MSRs (LBR_INFO) and some new MSR bits used by the Intel Skylake
PMU driver.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-4-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/lbr: Allow time stamp for free running PEBSv3
Andi Kleen [Thu, 28 May 2015 04:13:14 +0000 (21:13 -0700)]
perf/x86/intel/lbr: Allow time stamp for free running PEBSv3

With PEBSv3 the PEBS record contains a time stamp. That means we can allow
free-running PEBS without a PMI even if the user program requested a time stamp.
This avoids the need to use -T to get free running PEBS, and also avoids
any problems with mis-identifying MMAPs later.

Move the free_running_flags state into a variable in x86_pmu and use it.
This only works when no explicit clock_id is set.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@kernel.org
Cc: eranian@google.com
Cc: jolsa@redhat.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1432786398-23861-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel: Add support for PEBSv3 profiling
Andi Kleen [Sun, 10 May 2015 19:22:40 +0000 (12:22 -0700)]
perf/x86/intel: Add support for PEBSv3 profiling

PEBSv3 is the same as the existing PEBSv2 used on Haswell,
but it adds a new TSC field. Add support to the generic
PEBS handler to handle the new format, and overwrite
the perf time stamp using the new native_sched_clock_from_tsc().

Right now the time stamp is just slightly more accurate,
as it is nearer the actual event trigger point. With
the PEBS threshold > 1 patchkit it will be much more accurate and
avoid the problems with MMAP mismatches mentioned earlier.
The accurate time stamping is only implemented for
the default trace clock for now.

v2: Use _skl prefix. Check for default clock_id.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-3-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86: Add a native_perf_sched_clock_from_tsc()
Andi Kleen [Sun, 10 May 2015 19:22:39 +0000 (12:22 -0700)]
perf/x86: Add a native_perf_sched_clock_from_tsc()

PEBSv3 has a raw TSC time stamp in its memory buffer that
later needs to be converted to perf_clock.

Add a native_sched_clock_from_tsc() that works the same
as native_sched_clock(), but starts with an already given
TSC value.
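
The core of the conversion in sketch form (hypothetical helper; the kernel
actually scales with per-CPU cyc2ns data rather than a flat mult/shift pair):

   /* ns = cycles * mult >> shift, with mult/shift derived from the TSC freq */
   static u64 tsc_to_ns_sketch(u64 tsc, u32 mult, u32 shift)
   {
           return (tsc * mult) >> shift;
   }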

Paravirt is ignored; it will just get the native clock.
But there isn't a paravirtualized PEBS anyway.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Link: http://lkml.kernel.org/r/1431285767-27027-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/pt: Add new timing packet enables
Alexander Shishkin [Thu, 30 Jul 2015 13:15:31 +0000 (16:15 +0300)]
perf/x86/intel/pt: Add new timing packet enables

The Intel PT chapter in the new Intel Architecture SDM adds several packets,
corresponding enable bits and registers that control packet generation.
Also, additional bits in the Intel PT CPUID leaf were added to enumerate
the presence and parameters of these new packets and features.

The packets and enables are:

  * CYC: cycle accurate mode, provides the number of cycles elapsed since
    previous CYC packet; its presence and available threshold values are
    enumerated via CPUID;

  * MTC: mini time counter packets, used for tracking TSC time between
    full TSC packets; its presence and available resolution options are
    enumerated via CPUID;

  * PSB packet period is now configurable, available period values are
    enumerated via CPUID.

This patch adds the corresponding bit and register definitions, PMU driver
capabilities based on CPUID enumeration and new attribute format bits for
the new features, and extends the event configuration validation function
to take these into account.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1438262131-12725-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/pt: Do not force sync packets on every schedule-in
Alexander Shishkin [Thu, 30 Jul 2015 13:48:24 +0000 (16:48 +0300)]
perf/x86/intel/pt: Do not force sync packets on every schedule-in

Currently, the PT driver zeroes out the status register every time before
starting the event. However, all the writable bits are already taken care
of in pt_handle_status() function, except the new PacketByteCnt field,
which in new versions of PT contains the number of packet bytes written
since the last sync (PSB) packet. Zeroing it out before enabling PT forces
a sync packet to be written. This means that, with the existing code, a
sync packet (PSB and PSBEND, 18 bytes in total) will be generated every
time a PT event is scheduled in.

To avoid these unnecessary syncs and save a WRMSR in the fast path, this
patch changes the default behavior to not clear PacketByteCnt field, so
that the sync packets will be generated with the period specified as
"psb_period" attribute config field. This has little impact on the trace
data as the other packets that are normally sent within PSB+ (between PSB
and PSBEND) have their own generation scenarios which do not depend on the
sync packets.

One exception is that we do need to force a PSB like this when tracing
starts, so that the decoder has a clear sync point in the trace. For this
purpose we already have the hw::itrace_started flag, which we are currently
using to output PERF_RECORD_ITRACE_START. This patch moves setting itrace_started
from perf core to the pmu::start, where it should still be 0 on the very
first run.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1438264104-16189-1-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/hw_breakpoints: Fix check for kernel-space breakpoints
Andy Lutomirski [Fri, 31 Jul 2015 03:32:42 +0000 (20:32 -0700)]
perf/x86/hw_breakpoints: Fix check for kernel-space breakpoints

The check looked wrong, although I think it was actually safe.  TASK_SIZE
is unnecessarily small for compat tasks, and it wasn't possible to make
a range breakpoint so large it started in user space and ended in kernel
space.

Nonetheless, let's fix up the check for the benefit of future
readers.  A breakpoint is in the kernel if either end is in the
kernel.
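
In sketch form (TASK_SIZE_MAX is the real x86 limit, the helper name is
illustrative):

   static inline bool bp_touches_kernel(unsigned long addr, unsigned long len)
   {
           /* in the kernel if either end of the range is in the kernel */
           return addr >= TASK_SIZE_MAX || addr + len - 1 >= TASK_SIZE_MAX;
   }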

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/136be387950e78f18cea60e9d1bef74465d0ee8f.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/hw_breakpoints: Improve range breakpoint validation
Andy Lutomirski [Fri, 31 Jul 2015 03:32:41 +0000 (20:32 -0700)]
perf/x86/hw_breakpoints: Improve range breakpoint validation

Range breakpoints will do the wrong thing if the address isn't
aligned.  While we're there, add comments about why it's safe for
instruction breakpoints.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/ae25d14d61f2f43b78e0a247e469f3072df7e201.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/hw_breakpoints: Disallow kernel breakpoints unless kprobe-safe
Andy Lutomirski [Fri, 31 Jul 2015 03:32:40 +0000 (20:32 -0700)]
perf/x86/hw_breakpoints: Disallow kernel breakpoints unless kprobe-safe

Code on the kprobe blacklist doesn't want unexpected int3
exceptions. It probably doesn't want unexpected debug exceptions
either. Be safe: disallow breakpoints in nokprobes code.

On non-CONFIG_KPROBES kernels, there is no kprobe blacklist.  In
that case, disallow kernel breakpoints entirely.

It will be particularly important to keep hw breakpoints out of the
entry and NMI code once we move debug exceptions off the IST stack.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/e14b152af99640448d895e3c2a8c2d5ee19a1325.1438312874.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel: Fix SLM MSR_OFFCORE_RSP1 valid_mask
Kan Liang [Wed, 24 Jun 2015 18:23:35 +0000 (11:23 -0700)]
perf/x86/intel: Fix SLM MSR_OFFCORE_RSP1 valid_mask

AVG_LATENCY(bit 38) is only available on MSR_OFFCORE_RSP0.
So the bit should be removed from RSP1 valid_mask.

Since RSP0 and RSP1 may have different valid_mask, intel_alt_er should
validate the config on the alternate offcore reg before replacing it.

Signed-off-by: Kan Liang <kan.liang@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1435170215-5017-1-git-send-email-kan.liang@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/lbr: Kill off intel_pmu_needs_lbr_smpl for good
Alexander Shishkin [Wed, 24 Jun 2015 10:05:49 +0000 (13:05 +0300)]
perf/x86/intel/lbr: Kill off intel_pmu_needs_lbr_smpl for good

The x86_lbr_exclusive commit (4807034248be "perf/x86: Mark Intel PT and
LBR/BTS as mutually exclusive") mistakenly moved intel_pmu_needs_lbr_smpl()
to perf_event.h, while another commit (a46a2300019 "perf: Simplify the
branch stack check") removed it in favor of needs_branch_stack().

This patch gets rid of intel_pmu_needs_lbr_smpl() for good.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1435140349-32588-3-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/bts: Drop redundant declarations
Alexander Shishkin [Wed, 24 Jun 2015 10:05:48 +0000 (13:05 +0300)]
perf/x86/intel/bts: Drop redundant declarations

Both intel_pmu_enable_bts() and intel_pmu_disable_bts() are in perf_event.h
header file, no need to have them declared again in the driver.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: acme@infradead.org
Cc: adrian.hunter@intel.com
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1435140349-32588-2-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/uncore: Use Sandy Bridge client PMU on Haswell/Broadwell
Andi Kleen [Mon, 15 Jun 2015 05:57:41 +0000 (22:57 -0700)]
perf/x86/intel/uncore: Use Sandy Bridge client PMU on Haswell/Broadwell

Haswell and Broadwell have the same uncore CBOX/ARB PMU as Sandy Bridge.
Add the respective model numbers to enable the SNB uncore PMU.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1434347862-28490-2-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/uncore: Add support for ARB uncore PMU on Sandy/IvyBridge
Andi Kleen [Mon, 15 Jun 2015 05:57:40 +0000 (22:57 -0700)]
perf/x86/intel/uncore: Add support for ARB uncore PMU on Sandy/IvyBridge

Add a new "ARB" uncore PMU that is used to monitor the uncore queue
arbiter. This is useful to measure uncore queue occupancy and similar
statistics. The registers all have the same format as the
existing CBOX PMU.

Also move the event constraints from the CBOX to ARB. The 0x80+
events are ARB events and cannot be scheduled on a CBOX PMU.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: eranian@google.com
Cc: kan.liang@intel.com
Link: http://lkml.kernel.org/r/1434347862-28490-1-git-send-email-andi@firstfloor.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/uncore: Remove use of macro DEFINE_PCI_DEVICE_TABLE()
Vaishali Thakkar [Fri, 17 Jul 2015 05:27:59 +0000 (10:57 +0530)]
perf/x86/intel/uncore: Remove use of macro DEFINE_PCI_DEVICE_TABLE()

The DEFINE_PCI_DEVICE_TABLE() macro is deprecated. Use
'struct pci_device_id' instead of DEFINE_PCI_DEVICE_TABLE(),
with the goal of getting rid of this macro completely.

This Coccinelle semantic patch performs this transformation:

@@
identifier a;
declarer name DEFINE_PCI_DEVICE_TABLE;
initializer i;
@@
- DEFINE_PCI_DEVICE_TABLE(a)
+ const struct pci_device_id a[] = i;

Signed-off-by: Vaishali Thakkar <vthakkar1994@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150717052759.GA6265@vaishali-Ideapad-Z570
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agoperf/x86/intel/rapl: Add support for Knights Landing (KNL)
Dasaratharaman Chandramouli [Tue, 26 May 2015 18:47:39 +0000 (11:47 -0700)]
perf/x86/intel/rapl: Add support for Knights Landing (KNL)

Knights Landing RAPL supports the PKG and DRAM RAPL domains.
DRAM RAPL has a different fixed energy unit (2^-16J) similar to
that of HSW.

Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Stephane Eranian <eranian@google.com>
Acked-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jacob Pan Jun <jacob.jun.pan@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nikhil Rao <nikhil.rao@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/aa63b4a3af3160152fea1a10c807f4200527280c.1432665809.git.dasaratharaman.chandramouli@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
9 years agouprobes: Fix the waitqueue_active() check in xol_free_insn_slot()
Oleg Nesterov [Tue, 21 Jul 2015 13:40:36 +0000 (15:40 +0200)]
uprobes: Fix the waitqueue_active() check in xol_free_insn_slot()

The xol_free_insn_slot()->waitqueue_active() check is buggy. We
need mb() after we set the condition for wait_event(), or
xol_take_insn_slot() can miss the wakeup.
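
The canonical waker-side pattern, sketched with the real primitives (the
helper below is illustrative, not the exact patch):

   static void xol_free_slot_sketch(unsigned long *bitmap, int slot,
                                    wait_queue_head_t *wq)
   {
           clear_bit(slot, bitmap);  /* the wait_event() condition */
           smp_mb();                 /* pairs with the barrier in wait_event() */
           if (waitqueue_active(wq))
                   wake_up(wq);
   }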

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Pratyush Anand <panand@redhat.com>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20150721134036.GA4799@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>