Commit graph

299610 commits

Author SHA1 Message Date
Ingo Molnar
c4f400e837 Merge branch 'tip/perf/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into perf/core 2012-05-09 19:34:30 +02:00
Peter Zijlstra
cb04ff9ac4 sched, perf: Use a single callback into the scheduler
We can easily use a single callback for both sched-in and sched-out. This
reduces the code footprint in the scheduler path as well as removes
the PMU black spot otherwise present between the out and in callback.
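
A minimal sketch of the idea, assuming hypothetical helper names (this is
not the actual kernel interface): one context-switch callback does the
sched-out work for the previous task and the sched-in work for the next
task back to back.

  /* Illustrative only; the helpers below are hypothetical. */
  static void perf_event_task_sched(struct task_struct *prev,
                                    struct task_struct *next)
  {
          __perf_sched_out(prev, next);   /* park PMU state of prev          */
          __perf_sched_in(prev, next);    /* immediately set up PMU for next */
  }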

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-o56ajxp1edwqg6x9d31wb805@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:17 +02:00
Robert Richter
8b1e13638d perf/x86-ibs: Fix usage of IBS op current count
The value of IbsOpCurCnt rolls over when it reaches IbsOpMaxCnt. Thus,
it is reset to zero by hardware. To get the correct count we need to
add the max count to it in case we received an ibs sample (valid bit
set).
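
A hedged sketch of the resulting count reconstruction; IBS_OP_VAL is the
real valid bit, the current-count extraction below is illustrative.

  static u64 ibs_op_count(u64 ibs_op_ctl, u64 max_cnt)
  {
          u64 count = (ibs_op_ctl >> 32) & 0x7fffffff;    /* IbsOpCurCnt, illustrative */

          if (ibs_op_ctl & IBS_OP_VAL)    /* valid sample: counter rolled over to 0 */
                  count += max_cnt;

          return count;
  }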

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-13-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:17 +02:00
Robert Richter
fc5fb2b5e1 perf/x86-ibs: Catch spurious interrupts after stopping IBS
After disabling IBS there could still be incoming NMIs with samples
that even have the valid bit cleared. Mark all these NMIs as handled to
avoid spurious interrupt messages.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-12-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:16 +02:00
Robert Richter
c9574fe0bd perf/x86-ibs: Implement workaround for IBS erratum #420
When disabling IBS there might be the case where the hardware
continuously generates interrupts. This is described in erratum #420
(Instruction-Based Sampling Engine May Generate Interrupt that Cannot
Be Cleared). To avoid this we must clear the counter mask first and
then clear the enable bit. This patch implements that sequence.

See Revision Guide for AMD Family 10h Processors, Publication #41322.

Note: We now keep track of the last read IBS config value, which is
then used to disable IBS. To update the config value we now pass a
pointer to the functions reading it.
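
A sketch of the disable sequence, assuming the usual MSR/bit names from
the x86 headers; treat the exact code as illustrative.

  static void perf_ibs_op_disable(u64 config)
  {
          config &= ~IBS_OP_MAX_CNT;              /* 1) clear the counter (max count) mask */
          wrmsrl(MSR_AMD64_IBSOPCTL, config);

          config &= ~IBS_OP_ENABLE;               /* 2) only then clear the enable bit */
          wrmsrl(MSR_AMD64_IBSOPCTL, config);
  }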

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-11-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:16 +02:00
Robert Richter
7caaf4d824 perf/x86-ibs: Extend hw period that triggers overflow
If the last hw period is too short we might hit the irq handler which
biases the results. Thus try to have a max last period that triggers
the sw overflow.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-10-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:15 +02:00
Robert Richter
fc006cf7cc perf/x86-ibs: Trigger overflow if remaining period is too small
There are cases where the remaining period is smaller than the minimal
possible value. In this case the counter is restarted with the minimal
period. This is of no use as the interrupt handler will trigger
immediately again and most likely hits itself. This biases the
results.

So, if the remaining period is within the min range, we'd better not
restart the counter and instead trigger the overflow.
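
Roughly, as a hedged fragment (variable names illustrative):

  if (left < (s64)min_period) {
          /* restarting with the minimum would fire the handler again right away */
          overflow = 1;           /* report the overflow to the generic code instead */
  }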

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-9-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:15 +02:00
Robert Richter
98112d2e95 perf/x86-ibs: Rename some variables
Simple patch that just renames some variables for better
understanding.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-8-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:14 +02:00
Robert Richter
450bbd493d perf/x86-ibs: Precise event sampling with IBS for AMD CPUs
This patch adds support for precise event sampling with IBS. There are
two counting modes to count either cycles or micro-ops. If the
corresponding performance counter events (hw events) are set up with
the precise flag, the request is redirected to the ibs pmu:

 perf record -a -e cpu-cycles:p ...    # use ibs op counting cycle count
 perf record -a -e r076:p ...          # same as -e cpu-cycles:p
 perf record -a -e r0C1:p ...          # use ibs op counting micro-ops

Each ibs sample contains a linear address that points to the
instruction that caused the sample to trigger. With ibs we have
skid 0. Thus, ibs supports precise levels 1 and 2. Samples are marked
with the PERF_EFLAGS_EXACT flag set. In rare cases the rip is invalid
when IBS was not able to record the rip correctly. Then the
PERF_EFLAGS_EXACT flag is cleared and the rip is taken from pt_regs.
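
A hedged fragment of how a sample might pick up the IBS rip
(PERF_EFLAGS_EXACT is the real flag; the surrounding helpers are
illustrative):

  if (ibs_rip_valid(raw)) {
          regs.ip     = ibs_rip(raw);             /* instruction that caused the sample */
          regs.flags |= PERF_EFLAGS_EXACT;        /* precise: skid 0 */
  } else {
          regs.flags &= ~PERF_EFLAGS_EXACT;       /* keep the rip from pt_regs */
  }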

V2:
* don't drop samples in precise level 2 if rip is invalid, instead
  support the PERF_EFLAGS_EXACT flag

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120502103309.GP18810@erda.amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:14 +02:00
Robert Richter
d47e8238cd perf/x86-ibs: Take instruction pointer from ibs sample
Each IBS sample contains a linear address of the instruction that
caused the sample to trigger. This address is more precise than the
rip that was taken from the interrupt handler's stack. Update the rip
with that address. We use this in the next patch to implement
precise-event sampling on AMD systems using IBS.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-6-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:13 +02:00
Robert Richter
6accb9cf76 perf/x86-ibs: Fix frequency profiling
This fixes profiling at a fixed frequency; in this case the freq value
and sample period were set up incorrectly. Since sampling periods are
adjusted, we also allow periods that have the lower 4 bits set.

Another fix is the setup of the hw counter: If we modify
hwc->sample_period, we also need to update hwc->last_period and
hwc->period_left.
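
In other words, something along these lines (the struct hw_perf_event
fields are real; the fragment itself is illustrative):

  hwc->sample_period = new_period;
  hwc->last_period   = hwc->sample_period;
  local64_set(&hwc->period_left, hwc->sample_period);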

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-5-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:13 +02:00
Robert Richter
7bf352384f perf/x86-ibs: Enable ibs op micro-ops counting mode
Allow enabling ibs op micro-ops counting mode.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-4-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:12 +02:00
Robert Richter
fd0d000b2c perf: Pass last sampling period to perf_sample_data_init()
We always need to pass the last sample period to
perf_sample_data_init(), otherwise the event distribution will be
wrong. Thus, modify the function interface to take the required period
as an argument. So basically a pattern like this:

        perf_sample_data_init(&data, ~0ULL);
        data.period = event->hw.last_period;

will now be like that:

        perf_sample_data_init(&data, ~0ULL, event->hw.last_period);

This avoids an uninitialized data.period and simplifies the code.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-3-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:12 +02:00
Robert Richter
c75841a398 perf/x86-ibs: Fix update of period
The last sw period was not correctly updated on overflow and thus led
to wrong distribution of events. We always need to properly initialize
data.period in struct perf_sample_data.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333390758-10893-2-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-09 15:23:11 +02:00
Ingo Molnar
ad8537cda6 Merge branch 'perf/x86-ibs' into perf/core 2012-05-09 15:22:23 +02:00
Steven Rostedt
68179686ac tracing: Remove ftrace_disable/enable_cpu()
The ftrace_disable_cpu() and ftrace_enable_cpu() functions were
needed back before the ring buffer was lockless. Now that the
ring buffer is lockless (and has been for some time), these functions
serve no purpose, and unnecessarily slow down operations of the tracer.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-08 21:06:26 -04:00
Jiri Olsa
50e18b94c6 tracing: Use seq_*_private interface for some seq files
It's appropriate to use the __seq_open_private() interface to open
some of the trace seq files, because it covers all the steps we are
duplicating in the tracing code: zero-allocating the iterator and
setting it as the seq_file's private data.

Use this for the following files:
  trace
  available_filter_functions
  enabled_functions
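
A hedged sketch of the resulting open routine for one of these files;
__seq_open_private() is the real helper, the iterator type and
seq_operations name are stand-ins:

  static int tracing_open_something(struct inode *inode, struct file *file)
  {
          struct ftrace_iterator *iter;

          iter = __seq_open_private(file, &show_ftrace_seq_ops, sizeof(*iter));
          if (!iter)
                  return -ENOMEM;

          /* iter is zero-allocated and already stored as seq_file->private */
          return 0;
  }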

Link: http://lkml.kernel.org/r/1335342219-2782-5-git-send-email-jolsa@redhat.com

Signed-off-by: Jiri Olsa <jolsa@redhat.com>

[
 Fixed warnings for:
   kernel/trace/trace.c: In function '__tracing_open':
   kernel/trace/trace.c:2418:11: warning: unused variable 'ret' [-Wunused-variable]
   kernel/trace/trace.c:2417:19: warning: unused variable 'm' [-Wunused-variable]
]

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-08 21:04:12 -04:00
Ingo Molnar
149936a068 Merge branch 'perf/annotate' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Perf annotate browser improvements:

 - Get back the line separating the overheads from the disassembly, requested by
   Peter Zijlstra; Linus agreed now that it is a solid line and more column real
   estate was harvested. Also it has the jump->arrow lines separated from it by
   the address/jump target column.

 - Don't change asm line color when toggling source code view. Requested by
   Peter Zijlstra.

Current snapshot:

 avtab_search_node
        │      push   %rbp
        │      mov    %rsp,%rbp
        │    → callq  mcount
        │      movzwl 0x6(%rsi),%edx
        │      and    $0x7fff,%dx
        │      test   %rdi,%rdi
        │    ↓ jne    20
   0.42 │17:┌─→xor    %eax,%eax
        │19:│  leaveq
   0.42 │   │← retq
        │   │  nopl   0x0(%rax,%rax,1)
        │20:│  mov    (%rdi),%rax
   0.08 │   │  test   %rax,%rax
        │   └──je     17
        │      movzwl (%rsi),%ecx
        │      movzwl 0x2(%rsi),%r9d
        │      movzwl 0x4(%rsi),%r8d
        │      movzwl %cx,%esi
        │      movzwl %r9w,%r10d
        │      shl    $0x9,%esi
        │      lea    (%rsi,%r10,4),%esi
        │      lea    (%r8,%rsi,1),%esi
        │      and    0x10(%rdi),%si
        │      movzwl %si,%esi
        │      mov    (%rax,%rsi,8),%rax
   1.01 │      test   %rax,%rax
        │    ↑ je     19
        │      nopw   0x0(%rax,%rax,1)
   3.19 │60:   cmp    %cx,(%rax)
        │    ↓ jne    7e
   0.08 │      cmp    %r9w,0x2(%rax)
        │    ↓ jne    7e
        │      cmp    %r8w,0x4(%rax)
        │    ↓ jne    79
        │      test   %dx,0x6(%rax)
        │    ↑ jne    19
        │79:   cmp    %r8w,0x4(%rax)
  83.45 │7e: ↑ ja     17
   3.36 │      mov    0x10(%rax),%rax
   7.98 │      test   %rax,%rax
        │    ↑ jne    60
        │      leaveq
        │    ← retq

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-05-08 16:55:15 +02:00
Minho Ban
b02ee9a33b tracing: Prevent wasting time evaluating parameters in trace_preempt_on/off
This fixes trace_preempt_on/off spending time evaluating their parameters when
the tracer config is off.
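
As a hedged illustration (macro and helper names hypothetical), the idea
is to guard the call so that its arguments are only evaluated when the
tracer is actually active:

  #define trace_preempt_on(a0, a1)                        \
          do {                                            \
                  if (preempt_trace_enabled())            \
                          __trace_preempt_on(a0, a1);     \
          } while (0)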

The patch was mainly inspired by Steven Rostedt, thanks Steven.

Link: http://lkml.kernel.org/r/4FA73510.7070705@samsung.com

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Turner <pjt@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Minho Ban <mhban@samsung.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-08 09:33:52 -04:00
Arnaldo Carvalho de Melo
b9818e9375 perf annotate browser: Compact 'nop' output
Just suppress the nop operands; future infrastructure that will record
the instruction length (and its contents) in struct ins will allow
rendering them as nopN, i.e. nop5 for a 5-byte nop.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-qddbeglfzqdlal8vj2yaj67y@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-07 19:00:42 -03:00
Arnaldo Carvalho de Melo
5417072bf6 perf annotate browser: Do raw printing in 'o'ffset in a single place
Instead of doing the same in all ins scnprintf methods.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-8mfairi2n1nentoa852alazv@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-07 18:54:16 -03:00
Ingo Molnar
19631cb3d6 Merge branch 'tip/perf/core-4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into perf/core 2012-05-07 11:03:52 +02:00
Steven Rostedt
59a094c994 ftrace/x86: Use asm/kprobes.h instead of linux/kprobes.h
If CONFIG_KPROBES is not set, then linux/kprobes.h will not include
asm/kprobes.h needed by x86/ftrace.c for the BREAKPOINT macro.

The x86/ftrace.c file should just include asm/kprobes.h as it does not
need the rest of kprobes.

Reported-by: Ingo Molnar <mingo@elte.hu>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-05-04 09:28:29 -04:00
Arnaldo Carvalho de Melo
64aa17ca5a perf annotate browser: Don't change the asm line color when toggling source
It gets confusing. An appropriate different color for source code
remains to be chosen.

This effectively reverts 58e817d997 ("perf annotate: Print asm code as
blue when source code is displayed")

Requested-by: Peter Zijlstra <peterz@infradead.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-qy9iq32nj3uqe5dbiuq9e3j9@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-03 13:23:00 -03:00
Arnaldo Carvalho de Melo
83b1f2aad4 perf annotate browser: More clearly separate columns
The first column (columns in the near future) are for the per line event
overhead(s), that only appear when they are not zero.

To clearly separate it, add back a solid vertical line, with just one
colour, not influenced by the per line overheads.

Then have the addr/offset column, then optionally the dynamic
(static in the future) jump->target arrows, if 'j' enables it.

Then the instructions.

Requested-by: Peter Zijlstra <peterz@infradead.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-r415t4sps0oyr9y8kd9j7clz@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-03 13:12:49 -03:00
Arnaldo Carvalho de Melo
4656cca11b perf ui browser: Introduce routine to draw vertical line
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-umb4jlu0ee8r2rc3x4jkahgk@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-05-03 13:07:05 -03:00
Steven Rostedt
4a6d70c950 ftrace/x86: Remove the complex ftrace NMI handling code
As ftrace function tracing would require modifying code that could
be executed in NMI context, which is not stopped with stop_machine(),
ftrace had to do a complex algorithm with various stages of setup
and memory barriers to make it work.

With the new breakpoint method, this is no longer required. The changes
to the code can be done without any problem in NMI context, as well as
without stop machine altogether. Remove the complex code as it is
no longer needed.

Also, a lot of the notrace annotations could be removed from the
NMI code as it is now safe to trace them. With the exception of
do_nmi itself, which does some special work to handle running in
the debug stack. The breakpoint method can cause NMIs to double
nest the debug stack if it's not set up properly, and that is done
in do_nmi(), thus that function must not be traced.

(Note the arch sh may want to do the same)

Cc: Paul Mundt <lethal@linux-sh.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-04-27 21:11:28 -04:00
Steven Rostedt
08d636b6d4 ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine
This method changes x86 to add a breakpoint to the mcount locations
instead of calling stop machine.

Now that iret can be handled by NMIs, we perform the following to
update code:

1) Add a breakpoint to all locations that will be modified

2) Sync all cores

3) Update all locations to be either a nop or call (except breakpoint
   op)

4) Sync all cores

5) Remove the breakpoint with the new code.

6) Sync all cores

[
  Added updates that Masami suggested:
   Use unlikely(modifying_ftrace_code) in int3 trap to keep kprobes efficient.
   Don't use NOTIFY_* in ftrace handler in int3 as it is not a notifier.
]
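
A hedged pseudo-C summary of the sequence above (helper names are
illustrative; run_sync() stands in for whatever IPI/sync mechanism is
used):

  static void ftrace_update_code_with_breakpoints(void)
  {
          add_breakpoints();      /* 1) int3 at every location to be modified        */
          run_sync();             /* 2) make sure all cores see the breakpoints      */
          update_call_sites();    /* 3) write the new nop/call, except the int3 byte */
          run_sync();             /* 4) sync all cores                               */
          remove_breakpoints();   /* 5) replace the int3 byte with the new opcode    */
          run_sync();             /* 6) sync all cores                               */
  }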

Cc: H. Peter Anvin <hpa@zytor.com>
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-04-27 21:10:44 -04:00
Arnaldo Carvalho de Melo
0822cc80d9 perf annotate browser: Don't display 0.00 percentages
Cleaning up more the output.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-81pimnsnaa9y2j0a9plstu1c@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-27 17:13:53 -03:00
Arnaldo Carvalho de Melo
3e8b5ddf17 perf annotate browser: Remove the vertical line after the percentages
It is confusing when used with jump -> target lines.

Requested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xeiyfsxptwtmlvowledg6wpy@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-27 16:44:56 -03:00
Arnaldo Carvalho de Melo
9d1ef56d57 perf annotate browser: Show current jump, back or forward
Instead of trying to show the current loop by naively looking for the
next backward jump, just use 'j' to toggle showing arrows connecting
jump with its target.

And do it for forward jumps as well.

Loop detection requires more code to follow the flow control, etc.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-soahcn1lz2u4wxj31ch0594j@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-27 16:35:29 -03:00
Arnaldo Carvalho de Melo
944e1abed9 perf ui browser: Add method to draw up/down arrow line
It figures out the direction and draws downwards arrows too if that is
the case.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-tg329nr7q4dg9d0tl3o0wywg@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-27 16:27:52 -03:00
Arnaldo Carvalho de Melo
88298f5a52 perf annotate browser: Add a right arrow before call instructions
The counterpart of 'ret' instructions.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-jlz2ldaquaow0rqi2vr4b91l@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-27 15:10:54 -03:00
Ingo Molnar
1fa2e84db3 Fixes on top of the previous perf/annotate pull request
. Sometimes a jump points to an offset with no instructions, make the
   mark jump targets function handle that, for now just ignoring such
   jump targets, more investigation is needed to figure out how to cope
   with that.
 
 . Handle jump targets that are outside the function, for now just don't
   try to draw the connector arrow, right thing seems to be to mark this
   jump with a -> (right arrow) and handle it like a callq.
 
 Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.14 (GNU/Linux)
 
 iQIcBAABAgAGBQJPmWGLAAoJENZQFvNTUqpAPwYP/1HLjBtDkh9IvyQdixHQsrsK
 D3SUajTPYVOwUMMtLKerpcEWgl8TvyBYiciYDTlVF3vPkIifKPV6VC3AUWfPuuOq
 RRZK8fwTiU8SBzBs0TbDs3uSMUh6HyICd8NrsCKVvydtAvtCErIPFZYQRcUbsLcS
 i5om0HTyeDUmvGMFZldrpu7rkaM5VoWp5cMCUFLHOFxg4b3dZV5Xuejn3M4vrtnN
 UNH5Koy6jBb+B9Y1QV28PUxs56nab2n+/04ruvGVwcpxjk939/eZKTxda4VkeRZs
 rev4yh9yf3xnvnDC2T/6hNM+gaapsSx5ho26qjh2Wn1Ty3OOlNo0dl5OygNCF0iX
 W0XrBzCh8O5V7Wyd8enY94tt97S8OYUhQmNXz1ee+1C284+vWNteIIhHK7MKOWn4
 Xy0RGU4qLQCKX/xZFmlRhlyoil5lS7R/zVUoimOaI3P4cjqsXHFZ+PT+8QkHu6Wk
 nisD2XX/v/gdkNuZ5ItW/gseAulp9JdvzHzTcrfPBnG+z1NqOvnPTmW3tlnvqmsb
 S4S0try6tbH2hBblP5Z4n7LiEudiD0tsdvqbnD0FvyhCTJkSP84q7rP6tUpqmLbH
 4EbRBRNQafjN08c4kZk8085C86iKSMQ0LKOJ+TOjJLsEHZzYtghNgOOVktKrnh8r
 n6cLlI0wE2iYFhZ92Oc9
 =NPh2
 -----END PGP SIGNATURE-----

Merge tag 'perf-annotate-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core

Annotation improvements:

Now the default annotate browser uses a much more compact format, implementing
suggestions made by several people, notably Linus.

Here is part of the new __list_del_entry() annotation:

__list_del_entry
    8.47 │      push   %rbp
    8.47 │      mov    (%rdi),%rdx
   20.34 │      mov    $0xdead000000100100,%rcx
    3.39 │      mov    0x8(%rdi),%rax
    0.00 │      mov    %rsp,%rbp
    1.69 │      cmp    %rcx,%rdx
    0.00 │      je     43
    1.69 │      mov    $0xdead000000200200,%rcx
    3.39 │      cmp    %rcx,%rax
    0.00 │      je     a3
    5.08 │      mov    (%rax),%r8
   18.64 │      cmp    %r8,%rdi
    0.00 │      jne    84
    1.69 │      mov    0x8(%rdx),%r8
   25.42 │      cmp    %r8,%rdi
    0.00 │      jne    65
    1.69 │      mov    %rax,0x8(%rdx)
    0.00 │      mov    %rdx,(%rax)
    0.00 │      leaveq
    0.00 │      retq
    0.00 │ 43:  mov    %rdx,%r8
    0.00 │      mov    %rdi,%rcx
    0.00 │      mov    $0xffffffff817cd6a8,%rdx
    0.00 │      mov    $0x31,%esi
    0.00 │      mov    $0xffffffff817cd6e0,%rdi
    0.00 │      xor    %eax,%eax
    0.00 │      callq  ffffffff8104eab0 <warn_slowpath_fmt>
    0.00 │      leaveq
    0.00 │      retq
    0.00 │ 65:  mov    %rdi,%rcx
    0.00 │      mov    $0xffffffff817cd780,%rdx
    0.00 │      mov    $0x3a,%esi
    0.00 │      mov    $0xffffffff817cd6e0,%rdi
    0.00 │      xor    %eax,%eax
    0.00 │      callq  ffffffff8104eab0 <warn_slowpath_fmt>
    0.00 │      leaveq
    0.00 │      retq

The infrastructure is there to provide formatters for any instruction,
like the one I'll do for call functions to elide the address.

Further fixes on top of the first iteration:

- Sometimes a jump points to an offset with no instructions, make the
  mark jump targets function handle that, for now just ignoring such
  jump targets, more investigation is needed to figure out how to cope
  with that.

- Handle jump targets that are outside the function, for now just don't
  try to draw the connector arrow, right thing seems to be to mark this
  jump with a -> (right arrow) and handle it like a callq.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-04-27 09:00:50 +02:00
Robert Richter
392d65a9ad perf: Remove PERF_COUNTERS config option
Renaming remaining PERF_COUNTERS options into PERF_EVENTS.

Think we can get rid of PERF_COUNTERS now.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333643084-26776-5-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-04-26 13:52:52 +02:00
Robert Richter
33b07b8be7 perf: Use static variant of perf_event_overflow in core.c
No need to have an additional function layer.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333643084-26776-4-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-04-26 13:52:52 +02:00
Robert Richter
5f09fc6889 perf/x86: Fix cmpxchg() usage in amd_put_event_constraints()
Now the return value of cmpxchg() is used to match an event. The
change removes the duplicate event comparison and traverses the list
until an event is removed. This also fixes the following warning:

 arch/x86/kernel/cpu/perf_event_amd.c:170: warning: value computed is not used
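
The resulting loop looks roughly like this (a sketch close to, but not
guaranteed to match, the code in perf_event_amd.c):

  for (i = 0; i < x86_pmu.num_counters; i++) {
          if (cmpxchg(nb->owners + i, event, NULL) == event)
                  break;          /* this slot was ours and has been released */
  }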

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333643084-26776-3-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-04-26 13:52:51 +02:00
Robert Richter
10c250234c perf: Trivial cleanup of duplicate code
Removing duplicate code.

Signed-off-by: Robert Richter <robert.richter@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1333643084-26776-2-git-send-email-robert.richter@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-04-26 13:52:50 +02:00
Arnaldo Carvalho de Melo
38b31bd0ce perf annotate browser: Don't draw jump connectors for out of function jumps
As described in the previous patch. The next step is to properly label those
jumps by using a -> arrow, i.e. neither backwards nor forwards, and allow the
user to navigate to this other function when enter or -> is pressed.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-ax2sss463eu88wgl9ee8a6b6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-25 14:18:42 -03:00
Arnaldo Carvalho de Melo
fb29fa58e3 perf annotate: Mark jump instructions with no offset
I.e. jumps that go to code outside the current function, that is denoted
in objdump -dS as:

   399f877a9f: jne    399f87bcf4 <_L_lock_5154>

I.e. without the + after the name of the current function, like in:

   399f877aa5: jmp    399f877ab2 <_int_free+0x412>

The browser will use that info to avoid drawing connectors to the start
of the function, since ops.target.addr was zero.

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-xrn35g2mlawz1ydo1p73w3q6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-25 14:16:03 -03:00
Arnaldo Carvalho de Melo
44d1a3edfb perf annotate: Disambiguate offsets and addresses in operands
We were using ins_ops->target for callq addresses and jump offsets,
disambiguate by having ins_ops->target.addr and ins_ops->target.offset.

For jumps we'll need both to fixup lines that don't have an offset on
the <> part.
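
A hedged sketch of the split, loosely modelled on
tools/perf/util/annotate.h:

  struct ins_operands {
          struct {
                  u64 addr;       /* absolute target, e.g. for callq          */
                  s64 offset;     /* function-relative target, e.g. for jumps */
          } target;
          /* other operand fields omitted */
  };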

Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-3nlcmstua75u07ao7wja1rwx@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-25 08:00:23 -03:00
Ingo Molnar
fab06992de perf/x86: Clean up register_nmi_handler() usage
A function name represents the pointer to it - no need to take the
address of it. (Fixing this helps us introduce some macro magic
around register_nmi_handler() in the future.)

Cc: Robert Richter <robert.richter@amd.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-04-25 12:56:26 +02:00
Arnaldo Carvalho de Melo
9481ede909 perf annotate browser: Handle NULL jump targets
In annotate_browser__mark_jump_targets

702                     dlt = browser->offsets[dl->ops.target];
703                     bdlt = disasm_line__browser(dlt);
704                     bdlt->jump_target = true;
705             }
706
707     }

(gdb) p size
$5 = 2415
(gdb) p offset
$6 = 140
(gdb) p dl->ops.target
$7 = 143
(gdb) p browser->offsets[143]
$8 = (struct disasm_line *) 0x0
(gdb) p dl->name
$9 = 0x2363bd0 "je"
(gdb)

Really strange: the code assumed that at the jump target we would have
an assembly line, but only at the previous instruction offset do we have
a 'lock':

(gdb) p browser->offsets[144]
$10 = (struct disasm_line *) 0x0
(gdb) p browser->offsets[142]
$11 = (struct disasm_line *) 0x27bd620
(gdb) p browser->offsets[142]->name
$12 = 0x237a8a0 "lock"
(gdb)

I'll study this more, but for now I'll just check if there is a
disasm_line at dl->ops.target, i.e. a valid jump target.
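
The guard is then a hedged variant of:

  dlt = browser->offsets[dl->ops.target];
  if (dlt == NULL)        /* no disasm_line at the jump target: skip marking it */
          continue;

  bdlt = disasm_line__browser(dlt);
  bdlt->jump_target = true;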

Reported-by: Hagen Paul Pfeifer <hagen@jauu.net>
Reported-by: Ingo Molnar <mingo@kernel.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-inzjrzyqhkzyv78met2vula6@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-25 07:48:42 -03:00
Ingo Molnar
3dbe927b1e Linux 3.4-rc4
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.18 (GNU/Linux)
 
 iQEcBAABAgAGBQJPkysaAAoJEHm+PkMAQRiGw9cH/1cepvUZxu/gHZnmWX/FnL0y
 AlyYgC/kHnM4MRp6GRt7RxbnJhVHPmTDuyMS4uZI7IwBr66FZiuL2Ds3tuv5WwBY
 sN24G0plKsNTjQoNdaBsA/M84dLG71YfHehQUCx271huiAalIL3fNJnwNnJHyGQj
 78evmV3/FhoTYISiX+nf18GEzggmU4a7UaQx64VWjkC5qcy8Q0TMLOaRe2QN7atE
 AfNYEMwtOlpHGfFA2SemfD8z4yTwjfmXYfwtqqxeUeLfl9iAh9qWfY6g9UEDAhce
 N1pqA11VNZg8nqUnGizA+p1Ja/lLjpoORYGCvCHrNVMzFnckFiIc/7CJJe/knkY=
 =6w68
 -----END PGP SIGNATURE-----

Merge tag 'v3.4-rc4' into perf/core

Merge v3.4-rc4 - we were on -rc2 before.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2012-04-25 08:59:16 +02:00
Arnaldo Carvalho de Melo
a3f895be1f perf annotate browser: Initial loop detection
Simple algorithm, just look for the next backward jump that points to
before the cursor.

Then draw an arrow connecting the jump to its target.

Do this as you move the cursor, entering/exiting possible loops.

Ex (graph chars replaced to avoid mail encoding woes):

avc_has_perm_flags
    0.00 |         nopl   0x0(%rax)
    5.36 |+-> 68:  mov    (%rax),%rax
    5.15 ||        test   %rax,%rax
    0.00 ||      v je     130
    2.96 ||   74:  cmp    -0x20(%rax),%ebx
   47.38 ||        lea    -0x20(%rax),%rcx
    0.28 ||      ^ jne    68
    3.16 ||        cmp    -0x18(%rax),%dx
    0.00 |+------^ jne    68
    4.92 |         cmp    0x4(%rcx),%r13d
    0.00 |       v jne    68
    1.15 |         test   %rcx,%rcx
    0.00 |       v je     130

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-5gairf6or7dazlx3ocxwvftm@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2012-04-24 14:24:28 -03:00
Vaibhav Nagarnaik
438ced1720 ring-buffer: Add per_cpu ring buffer control files
Add a debugfs entry called buffer_size_kb under the per_cpu/ folder for
each CPU to control the ring buffer size for each CPU independently.

If the global file buffer_size_kb is used to set size, the individual
ring buffers will be adjusted to the given size. The buffer_size_kb will
report the common size to maintain backward compatibility.

If the buffer_size_kb file under the per_cpu/ directory is used to
change buffer size for a specific CPU, only the size of the respective
ring buffer is updated. When tracing/buffer_size_kb is read, it reports
'X' to indicate that sizes of per_cpu ring buffers are not equivalent.

Link: http://lkml.kernel.org/r/1328212844-11889-1-git-send-email-vnagarnaik@google.com

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Michael Rubin <mrubin@google.com>
Cc: David Sharp <dhsharp@google.com>
Cc: Justin Teravest <teravest@google.com>
Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-04-23 21:17:51 -04:00
Dan Carpenter
5a26c8f0cf tracing: Remove an unneeded check in trace_seq_buffer()
memcpy() returns a pointer to "buf".  Hopefully, it's not NULL here or
we would already have Oopsed.
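
For reference, a hedged sketch of the pattern being removed (the exact
context is illustrative):

  ret = memcpy(buf, s->buffer + s->readpos, cnt);
  if (!ret)               /* memcpy() returns its destination: never NULL here */
          return -EFAULT; /* dead check, removed */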

Link: http://lkml.kernel.org/r/20120420063145.GA22649@elgon.mountain

Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-04-23 21:16:10 -04:00
Steven Rostedt
07d777fe8c tracing: Add percpu buffers for trace_printk()
Currently, trace_printk() uses a single buffer to write into in order
to calculate the size and format needed to save the trace. To
do this safely in an SMP environment, a spin_lock() is taken
to only allow one writer at a time to the buffer. But this could
also affect what is being traced, and add synchronization that
would not be there otherwise.

Ideally, using percpu buffers would be useful, but since trace_printk()
is only used in development, having per cpu buffers for something
never used is a waste of space. Thus, the use of the trace_bprintk()
format section is changed to be used for static fmts as well as dynamic ones.
Then at boot up, we can check if the section that holds the trace_printk
formats is non-empty, and if it does contain something, then we
know a trace_printk() has been added to the kernel. At this time
the trace_printk per cpu buffers are allocated. A check is also
done at module load time in case a module is added that contains a
trace_printk().

Once the buffers are allocated, they are never freed. If you use
a trace_printk() then you should know what you are doing.

A buffer is made for each type of context:

  normal
  softirq
  irq
  nmi

The context is checked and the appropriate buffer is used.
This allows for totally lockless usage of trace_printk(),
and they no longer even disable interrupts.
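
A hedged sketch of the per-context buffer selection (struct, buffer and
size names are hypothetical):

  #define TRACE_BUF_SIZE 1024     /* illustrative size */

  struct trace_printk_buffers {
          char normal[TRACE_BUF_SIZE];
          char softirq[TRACE_BUF_SIZE];
          char irq[TRACE_BUF_SIZE];
          char nmi[TRACE_BUF_SIZE];
  };
  static DEFINE_PER_CPU(struct trace_printk_buffers, trace_printk_bufs);

  static char *get_trace_buf(void)
  {
          struct trace_printk_buffers *b = this_cpu_ptr(&trace_printk_bufs);

          if (in_nmi())
                  return b->nmi;
          if (in_irq())
                  return b->irq;
          if (in_softirq())
                  return b->softirq;
          return b->normal;
  }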

Requested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2012-04-23 21:15:55 -04:00
Linus Torvalds
66f75a5d02 Linux 3.4-rc4 2012-04-21 14:47:52 -07:00
Yong Zhang
e9a5ea1852 sparc32,leon: add notify_cpu_starting()
Otherwise cpu_active_mask will not be set, which leads to other issues.
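
A hedged one-liner of what the fix adds to the LEON secondary-CPU startup
path (the exact placement is sketched from the description above):

  notify_cpu_starting(cpuid);     /* run CPU_STARTING notifiers so cpu_active_mask gets set */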

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Konrad Eisele <konrad@gaisler.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-04-21 16:35:06 -04:00