Documentation/trace/ftrace.txt
ftrace - Function Tracer
========================

Copyright 2008 Red Hat Inc.
Author: Steven Rostedt <srostedt@redhat.com>
License: The GNU Free Documentation License, Version 1.2
(dual licensed under the GPL v2)
Reviewers: Elias Oltmanns, Randy Dunlap, Andrew Morton,
John Kacur, and David Teigland.
Written for: 2.6.28-rc2

Introduction
------------

Ftrace is an internal tracer designed to help out developers and
designers of systems to find what is going on inside the kernel.
It can be used for debugging or analyzing latencies and
performance issues that take place outside of user-space.

Although ftrace is the function tracer, it also includes an
infrastructure that allows for other types of tracing. Some of
the tracers that are currently in ftrace include a tracer to
trace context switches, the time it takes for a high priority
task to run after it was woken up, the time interrupts are
disabled, and more (ftrace allows for tracer plugins, which
means that the list of tracers can always grow).


Implementation Details
----------------------

See ftrace-design.txt for details for arch porters and such.


The File System
---------------

Ftrace uses the debugfs file system to hold the control files as
well as the files to display output.

When debugfs is configured into the kernel (which selecting any ftrace
option will do) the directory /sys/kernel/debug will be created. To mount
this directory, you can add to your /etc/fstab file:

 debugfs       /sys/kernel/debug          debugfs defaults        0       0

Or you can mount it at run time with:

 mount -t debugfs nodev /sys/kernel/debug

For quicker access to that directory you may want to make a soft link to
it:

 ln -s /sys/kernel/debug /debug

Any selected ftrace option will also create a directory called tracing
within the debugfs. The rest of the document will assume that you are in
the ftrace directory (cd /sys/kernel/debug/tracing) and will only concentrate
on the files within that directory and not distract from the content with
the extended "/sys/kernel/debug/tracing" path name.

That's it! (assuming that you have ftrace configured into your kernel)

After mounting the debugfs, you can see a directory called
"tracing". This directory contains the control and output files
of ftrace. Here is a list of some of the key files:


Note: all time values are in microseconds.

  current_tracer:

        This is used to set or display the current tracer
        that is configured.

  available_tracers:

        This holds the different types of tracers that
        have been compiled into the kernel. The
        tracers listed here can be configured by
        echoing their name into current_tracer.

  tracing_enabled:

        This sets or displays whether the current_tracer
        is activated and tracing or not. Echo 0 into this
        file to disable the tracer or 1 to enable it.

  trace:

        This file holds the output of the trace in a human
        readable format (described below).

  trace_pipe:

        The output is the same as the "trace" file but this
        file is meant to be streamed with live tracing.
        Reads from this file will block until new data is
        retrieved. Unlike the "trace" file, this file is a
        consumer. This means reading from this file causes
        sequential reads to display more current data. Once
        data is read from this file, it is consumed, and
        will not be read again with a sequential read. The
        "trace" file is static, and if the tracer is not
        adding more data, it will display the same
        information every time it is read.

  trace_options:

        This file lets the user control the amount of data
        that is displayed in one of the above output
        files.

  tracing_max_latency:

        Some of the tracers record the max latency.
        For example, the time interrupts are disabled.
        This time is saved in this file. The max trace
        will also be stored, and displayed by "trace".
        A new max trace will only be recorded if the
        latency is greater than the value in this
        file. (in microseconds)

  buffer_size_kb:

        This sets or displays the number of kilobytes each CPU
        buffer can hold. The tracer buffers are the same size
        for each CPU. The displayed number is the size of the
        CPU buffer and not total size of all buffers. The
        trace buffers are allocated in pages (blocks of memory
        that the kernel uses for allocation, usually 4 KB in size).
        If the last page allocated has room for more bytes
        than requested, the rest of the page will be used,
        making the actual allocation bigger than requested.
        ( Note, the size may not be a multiple of the page size
        due to buffer management overhead. )

        This can only be updated when the current_tracer
        is set to "nop".

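The page rounding described above can be sketched with shell arithmetic. This is a simplification for illustration only: the 4 KB page size is an assumption (it varies by architecture), and the real allocation also carries the buffer management overhead noted above.

```shell
# Sketch: round a requested buffer size up to whole pages,
# since the ring buffer allocates in page-sized blocks.
# 4 KB pages are assumed here for illustration.
req_kb=1410
page_kb=4
pages=$(( (req_kb + page_kb - 1) / page_kb ))
echo $(( pages * page_kb ))
```

So a request of 1410 kilobytes per CPU is satisfied with 353 whole pages, slightly more than asked for.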
  tracing_cpumask:

        This is a mask that lets the user only trace
        on specified CPUs. The format is a hex string
        representing the CPUs.

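As a sketch of how such a hex mask is formed: bit N of the mask selects CPU N. The echo at the end assumes debugfs is mounted as described earlier.

```shell
# Build a hex mask selecting CPUs 0 and 2 (bit N set => trace CPU N).
mask=$(printf '%x' $(( (1 << 0) | (1 << 2) )))
echo "$mask"
# Then, assuming a mounted debugfs:
#   echo $mask > /sys/kernel/debug/tracing/tracing_cpumask
```

CPUs 0 and 2 give bits 0 and 2, i.e. the hex string "5".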
  set_ftrace_filter:

        When dynamic ftrace is configured in (see the
        section below "dynamic ftrace"), the code is dynamically
        modified (code text rewrite) to disable calling of the
        function profiler (mcount). This lets tracing be configured
        in with practically no overhead in performance. This also
        has a side effect of enabling or disabling specific functions
        to be traced. Echoing names of functions into this file
        will limit the trace to only those functions.

  set_ftrace_notrace:

        This has an effect opposite to that of
        set_ftrace_filter. Any function that is added here will not
        be traced. If a function exists in both set_ftrace_filter
        and set_ftrace_notrace, the function will _not_ be traced.

  set_ftrace_pid:

        Have the function tracer only trace a single thread.

  set_graph_function:

        Set a "trigger" function where tracing should start
        with the function graph tracer (See the section
        "dynamic ftrace" for more details).

  available_filter_functions:

        This lists the functions that ftrace
        has processed and can trace. These are the function
        names that you can pass to "set_ftrace_filter" or
        "set_ftrace_notrace". (See the section "dynamic ftrace"
        below for more details.)


The Tracers
-----------

Here is the list of current tracers that may be configured.

  "function"

        Function call tracer to trace all kernel functions.

  "function_graph"

        Similar to the function tracer except that the
        function tracer probes the functions on their entry
        whereas the function graph tracer traces on both entry
        and exit of the functions. It then provides the ability
        to draw a graph of function calls similar to C code
        source.

  "sched_switch"

        Traces the context switches and wakeups between tasks.

  "irqsoff"

        Traces the areas that disable interrupts and saves
        the trace with the longest max latency.
        See tracing_max_latency. When a new max is recorded,
        it replaces the old trace. It is best to view this
        trace with the latency-format option enabled.

  "preemptoff"

        Similar to irqsoff but traces and records the amount of
        time for which preemption is disabled.

  "preemptirqsoff"

        Similar to irqsoff and preemptoff, but traces and
        records the largest time for which irqs and/or preemption
        is disabled.

  "wakeup"

        Traces and records the max latency that it takes for
        the highest priority task to get scheduled after
        it has been woken up.

  "hw-branch-tracer"

        Uses the BTS CPU feature on x86 CPUs to trace all
        branches executed.

  "nop"

        This is the "trace nothing" tracer. To remove all
        tracers from tracing simply echo "nop" into
        current_tracer.


Examples of using the tracer
----------------------------

Here are typical examples of using the tracers when controlling
them only with the debugfs interface (without using any
user-land utilities).

Output format:
--------------

Here is an example of the output format of the file "trace"

                             --------
# tracer: function
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
            bash-4251  [01] 10152.583854: path_put <-path_walk
            bash-4251  [01] 10152.583855: dput <-path_put
            bash-4251  [01] 10152.583855: _atomic_dec_and_lock <-dput
                             --------

A header is printed with the tracer name that is represented by
the trace. In this case the tracer is "function". Then a header
showing the format follows: the task name "bash", the task PID
"4251", the CPU that it was running on "01", the timestamp in
<secs>.<usecs> format, the function name that was traced
"path_put" and the parent function that called this function
"path_walk". The timestamp is the time at which the function
was entered.

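The fields of one such line can be pulled apart with awk; the sample line below is copied from the output above:

```shell
# Split a "trace" line into pid, cpu, timestamp, function,
# and parent function.
line='bash-4251  [01] 10152.583854: path_put <-path_walk'
echo "$line" | awk '{
    n = split($1, a, "-")                  # TASK-PID
    pid = a[n]
    cpu = substr($2, 2, length($2) - 2)    # strip the [ ]
    ts  = substr($3, 1, length($3) - 1)    # strip trailing ":"
    parent = substr($5, 3)                 # strip leading "<-"
    print pid, cpu, ts, $4, parent
}'
```

This prints the five fields as "4251 01 10152.583854 path_put path_walk".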
The sched_switch tracer also includes tracing of task wakeups
and context switches.

     ksoftirqd/1-7     [01]  1453.070013:      7:115:R   +  2916:115:S
     ksoftirqd/1-7     [01]  1453.070013:      7:115:R   +    10:115:S
     ksoftirqd/1-7     [01]  1453.070013:      7:115:R ==>    10:115:R
        events/1-10    [01]  1453.070013:     10:115:S ==>  2916:115:R
     kondemand/1-2916  [01]  1453.070013:   2916:115:S ==>     7:115:R
     ksoftirqd/1-7     [01]  1453.070013:      7:115:S ==>     0:140:R

Wake ups are represented by a "+" and the context switches are
shown as "==>". The format is:

 Context switches:

       Previous task              Next Task

  <pid>:<prio>:<state>  ==>  <pid>:<prio>:<state>

 Wake ups:

       Current task               Task waking up

  <pid>:<prio>:<state>    +  <pid>:<prio>:<state>

The prio is the internal kernel priority, which is the inverse
of the priority that is usually displayed by user-space tools.
Zero represents the highest priority (99). Prio 100 starts the
"nice" priorities with 100 being equal to nice -20 and 139 being
nice 19. The prio "140" is reserved for the idle task which is
the lowest priority thread (pid 0).


Latency trace format
--------------------

When the latency-format option is enabled, the trace file gives
somewhat more information to see why a latency happened.
Here is a typical trace.

# tracer: irqsoff
#
irqsoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: apic_timer_interrupt
 => ended at:   do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
  <idle>-0     0d..1    0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
  <idle>-0     0d.s.   97us : __do_softirq (do_softirq)
  <idle>-0     0d.s1   98us : trace_hardirqs_on (do_softirq)


This shows that the current tracer is "irqsoff" tracing the time
for which interrupts were disabled. It gives the trace version
and the version of the kernel on which this was executed
(2.6.26-rc8). Then it displays the max latency in microsecs (97
us). The number of trace entries displayed and the total number
recorded (both are three: #3/3). The type of preemption that was
used (PREEMPT). VP, KP, SP, and HP are always zero and are
reserved for later use. #P is the number of online CPUs (#P:2).

The task is the process that was running when the latency
occurred. (swapper pid: 0).

The start and stop (the functions in which the interrupts were
disabled and enabled respectively) that caused the latencies:

  apic_timer_interrupt is where the interrupts were disabled.
  do_softirq is where they were enabled again.

The next lines after the header are the trace itself. The header
explains which is which.

  cmd: The name of the process in the trace.

  pid: The PID of that process.

  CPU#: The CPU which the process was running on.

  irqs-off: 'd' interrupts are disabled. '.' otherwise.
            Note: If the architecture does not support a way to
                  read the irq flags variable, an 'X' will always
                  be printed here.

  need-resched: 'N' task need_resched is set, '.' otherwise.

  hardirq/softirq:
        'H' - hard irq occurred inside a softirq.
        'h' - hard irq is running
        's' - soft irq is running
        '.' - normal context.

  preempt-depth: The level of preempt_disable nesting.

The above is mostly meaningful for kernel developers.

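As a sketch, the five-character field (e.g. "0d.s1" from the example above) splits into these columns, one character each:

```shell
# Decode the 5-character latency field into its columns.
decode() {
    echo "$1" | awk '{
        print "cpu=" substr($0, 1, 1),
              "irqs=" substr($0, 2, 1),
              "resched=" substr($0, 3, 1),
              "ctx=" substr($0, 4, 1),
              "depth=" substr($0, 5, 1)
    }'
}
decode 0d.s1
```

For "0d.s1" this reports CPU 0, interrupts disabled, no resched pending, a softirq running, and a preempt depth of 1.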
  time: When the latency-format option is enabled, the trace file
        output includes a timestamp relative to the start of the
        trace. This differs from the output when latency-format
        is disabled, which includes an absolute timestamp.

  delay: This is just to help catch your eye a bit better. And
         needs to be fixed to be only relative to the same CPU.
         The marks are determined by the difference between this
         current trace and the next trace.
          '!' - greater than preempt_mark_thresh (default 100)
          '+' - greater than 1 microsecond
          ' ' - less than or equal to 1 microsecond.

The rest is the same as the 'trace' file.


trace_options
-------------

The trace_options file is used to control what gets printed in
the trace output. To see what is available, simply cat the file:

  cat trace_options
  print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
  noblock nostacktrace nosched-tree nouserstacktrace nosym-userobj

To disable one of the options, echo in the option prepended with
"no".

  echo noprint-parent > trace_options

To enable an option, leave off the "no".

  echo sym-offset > trace_options

Here are the available options:

  print-parent - On function traces, display the calling (parent)
                 function as well as the function being traced.

  print-parent:
   bash-4000  [01]  1477.606694: simple_strtoul <-strict_strtoul

  noprint-parent:
   bash-4000  [01]  1477.606694: simple_strtoul


  sym-offset - Display not only the function name, but also the
               offset in the function. For example, instead of
               seeing just "ktime_get", you will see
               "ktime_get+0xb/0x20".

  sym-offset:
   bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0

  sym-addr - this will also display the function address as well
             as the function name.

  sym-addr:
   bash-4000  [01]  1477.606694: simple_strtoul <c0339346>

  verbose - This deals with the trace file when the
            latency-format option is enabled.

    bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
    (+0.000ms): simple_strtoul (strict_strtoul)

  raw - This will display raw numbers. This option is best for
        use with user applications that can translate the raw
        numbers better than having it done in the kernel.

  hex - Similar to raw, but the numbers will be in a hexadecimal
        format.

  bin - This will print out the formats in raw binary.

  block - TBD (needs update)

  stacktrace - This is one of the options that changes the trace
               itself. When a trace is recorded, so is the stack
               of functions. This allows for back traces of
               trace sites.

  userstacktrace - This option changes the trace. It records a
                   stacktrace of the current userspace thread.

  sym-userobj - when user stacktraces are enabled, look up which
                object the address belongs to, and print a
                relative address. This is especially useful when
                ASLR is on, otherwise you don't get a chance to
                resolve the address to object/file/line after
                the app is no longer running.

                The lookup is performed when you read
                trace, trace_pipe. Example:

                a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]

  sched-tree - trace all tasks that are on the runqueue, at
               every scheduling event. Will add overhead if
               there are a lot of tasks running at once.

  latency-format - This option changes the trace. When
                   it is enabled, the trace displays
                   additional information about the
                   latencies, as described in "Latency
                   trace format".

sched_switch
------------

This tracer simply records schedule switches. Here is an example
of how to use it.

 # echo sched_switch > current_tracer
 # echo 1 > tracing_enabled
 # sleep 1
 # echo 0 > tracing_enabled
 # cat trace

# tracer: sched_switch
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
            bash-3997  [01]   240.132281:   3997:120:R   +  4055:120:R
            bash-3997  [01]   240.132284:   3997:120:R ==>  4055:120:R
           sleep-4055  [01]   240.132371:   4055:120:S ==>  3997:120:R
            bash-3997  [01]   240.132454:   3997:120:R   +  4055:120:S
            bash-3997  [01]   240.132457:   3997:120:R ==>  4055:120:R
           sleep-4055  [01]   240.132460:   4055:120:D ==>  3997:120:R
            bash-3997  [01]   240.132463:   3997:120:R   +  4055:120:D
            bash-3997  [01]   240.132465:   3997:120:R ==>  4055:120:R
          <idle>-0     [00]   240.132589:      0:140:R   +     4:115:S
          <idle>-0     [00]   240.132591:      0:140:R ==>     4:115:R
     ksoftirqd/0-4     [00]   240.132595:      4:115:S ==>     0:140:R
          <idle>-0     [00]   240.132598:      0:140:R   +     4:115:S
          <idle>-0     [00]   240.132599:      0:140:R ==>     4:115:R
     ksoftirqd/0-4     [00]   240.132603:      4:115:S ==>     0:140:R
           sleep-4055  [01]   240.133058:   4055:120:S ==>  3997:120:R
 [...]


As discussed previously, the header shows the name of the
tracer and points to the options. The "FUNCTION" column is a
misnomer since here it represents the wake ups and context
switches.

The sched_switch file only lists the wake ups (represented with
'+') and context switches ('==>') with the previous task or
current task first followed by the next task or task waking up.
The format for both of these is PID:KERNEL-PRIO:TASK-STATE.
Remember that the KERNEL-PRIO is the inverse of the actual
priority with zero (0) being the highest priority and the nice
values starting at 100 (nice -20). Below is a quick chart to map
the kernel priority to user land priorities.

  Kernel Space                     User Space
 ===============================================================
  0(high) to  98(low)              user RT priority 99(high) to 1(low)
                                   with SCHED_RR or SCHED_FIFO
 ---------------------------------------------------------------
  99                               sched_priority is not used in scheduling
                                   decisions(it must be specified as 0)
 ---------------------------------------------------------------
  100(high) to 139(low)            user nice -20(high) to 19(low)
 ---------------------------------------------------------------
  140                              idle task priority
 ---------------------------------------------------------------
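The chart above can be sketched as a small shell helper. This is an illustration of the mapping only, not part of ftrace, and the function name is made up:

```shell
# Map a kernel priority seen in the trace to its user-space meaning.
prio_to_user() {
    p=$1
    if   [ "$p" -le 98 ];  then echo "RT priority $(( 99 - p ))"
    elif [ "$p" -eq 99 ];  then echo "unused (specify 0)"
    elif [ "$p" -le 139 ]; then echo "nice $(( p - 120 ))"
    else                        echo "idle task"
    fi
}
prio_to_user 0      # highest RT priority
prio_to_user 115    # ksoftirqd in the examples above (nice -5)
prio_to_user 120    # default nice 0
prio_to_user 140    # idle task
```

So the "115" of ksoftirqd in the sched_switch example corresponds to nice -5, and sleep's "120" to nice 0.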

The task states are:

 R - running : wants to run, may not actually be running
 S - sleep : process is waiting to be woken up (handles signals)
 D - disk sleep (uninterruptible sleep) : process must be woken up
     (ignores signals)
 T - stopped : process suspended
 t - traced : process is being traced (with something like gdb)
 Z - zombie : process waiting to be cleaned up
 X - unknown


ftrace_enabled
--------------

The following tracers give different output depending on whether
or not the sysctl ftrace_enabled is set. To set ftrace_enabled,
one can either use the sysctl function or set it via the proc
file system interface.

  sysctl kernel.ftrace_enabled=1

or

  echo 1 > /proc/sys/kernel/ftrace_enabled

To disable ftrace_enabled simply replace the '1' with '0' in the
above commands.

When ftrace_enabled is set the tracers will also record the
functions that are within the trace. The descriptions of the
tracers will also show an example with ftrace enabled.


irqsoff
-------

When interrupts are disabled, the CPU cannot react to any other
external event (besides NMIs and SMIs). This prevents the timer
interrupt from triggering or the mouse interrupt from letting
the kernel know of a new mouse event. The result is added
latency in reaction time.

The irqsoff tracer tracks the time for which interrupts are
disabled. When a new maximum latency is hit, the tracer saves
the trace leading up to that latency point so that every time a
new maximum is reached, the old saved trace is discarded and the
new trace is saved.

To reset the maximum, echo 0 into tracing_max_latency. Here is
an example:

 # echo irqsoff > current_tracer
 # echo latency-format > trace_options
 # echo 0 > tracing_max_latency
 # echo 1 > tracing_enabled
 # ls -ltr
 [...]
 # echo 0 > tracing_enabled
 # cat trace
# tracer: irqsoff
#
irqsoff latency trace v1.1.5 on 2.6.26
--------------------------------------------------------------------
 latency: 12 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: bash-3730 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: sys_setpgid
 => ended at:   sys_setpgid

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
    bash-3730  1d...    0us : _write_lock_irq (sys_setpgid)
    bash-3730  1d..1    1us+: _write_unlock_irq (sys_setpgid)
    bash-3730  1d..2   14us : trace_hardirqs_on (sys_setpgid)


Here we see that we had a latency of 12 microsecs (which is
very good). The _write_lock_irq in sys_setpgid disabled
interrupts. The difference between the 12 and the displayed
timestamp 14us occurred because the clock was incremented
between the time of recording the max latency and the time of
recording the function that had that latency.

Note the above example had ftrace_enabled not set. If we set
ftrace_enabled, we get a much larger output:

# tracer: irqsoff
#
irqsoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: __alloc_pages_internal
 => ended at:   __alloc_pages_internal

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
      ls-4339  0...1    0us+: get_page_from_freelist (__alloc_pages_internal)
      ls-4339  0d..1    3us : rmqueue_bulk (get_page_from_freelist)
      ls-4339  0d..1    3us : _spin_lock (rmqueue_bulk)
      ls-4339  0d..1    4us : add_preempt_count (_spin_lock)
      ls-4339  0d..2    4us : __rmqueue (rmqueue_bulk)
      ls-4339  0d..2    5us : __rmqueue_smallest (__rmqueue)
      ls-4339  0d..2    5us : __mod_zone_page_state (__rmqueue_smallest)
      ls-4339  0d..2    6us : __rmqueue (rmqueue_bulk)
      ls-4339  0d..2    6us : __rmqueue_smallest (__rmqueue)
      ls-4339  0d..2    7us : __mod_zone_page_state (__rmqueue_smallest)
      ls-4339  0d..2    7us : __rmqueue (rmqueue_bulk)
      ls-4339  0d..2    8us : __rmqueue_smallest (__rmqueue)
[...]
      ls-4339  0d..2   46us : __rmqueue_smallest (__rmqueue)
      ls-4339  0d..2   47us : __mod_zone_page_state (__rmqueue_smallest)
      ls-4339  0d..2   47us : __rmqueue (rmqueue_bulk)
      ls-4339  0d..2   48us : __rmqueue_smallest (__rmqueue)
      ls-4339  0d..2   48us : __mod_zone_page_state (__rmqueue_smallest)
      ls-4339  0d..2   49us : _spin_unlock (rmqueue_bulk)
      ls-4339  0d..2   49us : sub_preempt_count (_spin_unlock)
      ls-4339  0d..1   50us : get_page_from_freelist (__alloc_pages_internal)
      ls-4339  0d..2   51us : trace_hardirqs_on (__alloc_pages_internal)



Here we traced a 50 microsecond latency. But we also see all the
functions that were called during that time. Note that by
enabling function tracing, we incur an added overhead. This
overhead may extend the latency times. But nevertheless, this
trace has provided some very helpful debugging information.


preemptoff
----------

When preemption is disabled, we may be able to receive
interrupts but the task cannot be preempted and a higher
priority task must wait for preemption to be enabled again
before it can preempt a lower priority task.

The preemptoff tracer traces the places that disable preemption.
Like the irqsoff tracer, it records the maximum latency for
which preemption was disabled. The control of the preemptoff
tracer is much like that of the irqsoff tracer.

 # echo preemptoff > current_tracer
 # echo latency-format > trace_options
 # echo 0 > tracing_max_latency
 # echo 1 > tracing_enabled
 # ls -ltr
 [...]
 # echo 0 > tracing_enabled
 # cat trace
# tracer: preemptoff
#
preemptoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: do_IRQ
 => ended at:   __do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
    sshd-4261  0d.h.    0us+: irq_enter (do_IRQ)
    sshd-4261  0d.s.   29us : _local_bh_enable (__do_softirq)
    sshd-4261  0d.s1   30us : trace_preempt_on (__do_softirq)


This has some more changes. Preemption was disabled when an
interrupt came in (notice the 'h'), and was enabled while doing
a softirq (notice the 's'). But we also see that interrupts
have been disabled when entering the preempt off section and
leaving it (the 'd'). We do not know if interrupts were enabled
in the mean time.

# tracer: preemptoff
#
preemptoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: remove_wait_queue
 => ended at:   __do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
    sshd-4261  0d..1    0us : _spin_lock_irqsave (remove_wait_queue)
    sshd-4261  0d..1    1us : _spin_unlock_irqrestore (remove_wait_queue)
    sshd-4261  0d..1    2us : do_IRQ (common_interrupt)
    sshd-4261  0d..1    2us : irq_enter (do_IRQ)
    sshd-4261  0d..1    2us : idle_cpu (irq_enter)
    sshd-4261  0d..1    3us : add_preempt_count (irq_enter)
    sshd-4261  0d.h1    3us : idle_cpu (irq_enter)
    sshd-4261  0d.h.    4us : handle_fasteoi_irq (do_IRQ)
[...]
    sshd-4261  0d.h.   12us : add_preempt_count (_spin_lock)
    sshd-4261  0d.h1   12us : ack_ioapic_quirk_irq (handle_fasteoi_irq)
    sshd-4261  0d.h1   13us : move_native_irq (ack_ioapic_quirk_irq)
    sshd-4261  0d.h1   13us : _spin_unlock (handle_fasteoi_irq)
    sshd-4261  0d.h1   14us : sub_preempt_count (_spin_unlock)
    sshd-4261  0d.h1   14us : irq_exit (do_IRQ)
    sshd-4261  0d.h1   15us : sub_preempt_count (irq_exit)
    sshd-4261  0d..2   15us : do_softirq (irq_exit)
    sshd-4261  0d...   15us : __do_softirq (do_softirq)
    sshd-4261  0d...   16us : __local_bh_disable (__do_softirq)
    sshd-4261  0d...   16us+: add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s4   20us : add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s4   21us : sub_preempt_count (local_bh_enable)
    sshd-4261  0d.s5   21us : sub_preempt_count (local_bh_enable)
[...]
    sshd-4261  0d.s6   41us : add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s6   42us : sub_preempt_count (local_bh_enable)
    sshd-4261  0d.s7   42us : sub_preempt_count (local_bh_enable)
    sshd-4261  0d.s5   43us : add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s5   43us : sub_preempt_count (local_bh_enable_ip)
    sshd-4261  0d.s6   44us : sub_preempt_count (local_bh_enable_ip)
    sshd-4261  0d.s5   44us : add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s5   45us : sub_preempt_count (local_bh_enable)
[...]
    sshd-4261  0d.s.   63us : _local_bh_enable (__do_softirq)
    sshd-4261  0d.s1   64us : trace_preempt_on (__do_softirq)


The above is an example of the preemptoff trace with
ftrace_enabled set. Here we see that interrupts were disabled
the entire time. The irq_enter code lets us know that we entered
an interrupt 'h'. Before that, the functions being traced still
show that it is not in an interrupt, but we can see from the
functions themselves that this is not the case.

Notice that __do_softirq when called does not have a
preempt_count. It may seem that we missed a preempt enabling.
What really happened is that the preempt count is held on the
thread's stack and we switched to the softirq stack (4K stacks
in effect). The code does not copy the preempt count, but
because interrupts are disabled, we do not need to worry about
it. Having a tracer like this is good for letting people know
what really happens inside the kernel.


preemptirqsoff
--------------

Knowing the locations that have interrupts disabled or
preemption disabled for the longest times is helpful. But
sometimes we would like to know when either preemption and/or
interrupts are disabled.

Consider the following code:

    local_irq_disable();                           /* irqsoff timing starts */
    call_function_with_irqs_off();
    preempt_disable();                             /* preemptoff timing starts */
    call_function_with_irqs_and_preemption_off();
    local_irq_enable();                            /* irqsoff timing stops */
    call_function_with_preemption_off();
    preempt_enable();                              /* preemptoff timing stops */

The irqsoff tracer will record the total length of
call_function_with_irqs_off() and
call_function_with_irqs_and_preemption_off().

The preemptoff tracer will record the total length of
call_function_with_irqs_and_preemption_off() and
call_function_with_preemption_off().

But neither will trace the time that interrupts and/or
preemption is disabled. This total time is the time that we
cannot schedule. To record this time, use the preemptirqsoff
tracer.

Again, using this trace is much like the irqsoff and preemptoff
tracers.

 # echo preemptirqsoff > current_tracer
 # echo latency-format > trace_options
 # echo 0 > tracing_max_latency
 # echo 1 > tracing_enabled
 # ls -ltr
 [...]
 # echo 0 > tracing_enabled
 # cat trace
# tracer: preemptirqsoff
#
preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: apic_timer_interrupt
 => ended at:   __do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
      ls-4860  0d...    0us!: trace_hardirqs_off_thunk (apic_timer_interrupt)
      ls-4860  0d.s.  294us : _local_bh_enable (__do_softirq)
      ls-4860  0d.s1  294us : trace_preempt_on (__do_softirq)

892
893
894 The trace_hardirqs_off_thunk is called from assembly on x86 when
895 interrupts are disabled in the assembly code. Without function
896 tracing, we do not know if interrupts were enabled within the
897 preemption points. We do see that the trace started with
898 preemption enabled.
899
900 Here is a trace with ftrace_enabled set:
901
902
903 # tracer: preemptirqsoff
904 #
905 preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
906 --------------------------------------------------------------------
907 latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
908 -----------------
909 | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
910 -----------------
911 => started at: write_chan
912 => ended at: __do_softirq
913
914 # _------=> CPU#
915 # / _-----=> irqs-off
916 # | / _----=> need-resched
917 # || / _---=> hardirq/softirq
918 # ||| / _--=> preempt-depth
919 # |||| /
920 # ||||| delay
921 # cmd pid ||||| time | caller
922 # \ / ||||| \ | /
923 ls-4473 0.N.. 0us : preempt_schedule (write_chan)
924 ls-4473 0dN.1 1us : _spin_lock (schedule)
925 ls-4473 0dN.1 2us : add_preempt_count (_spin_lock)
926 ls-4473 0d..2 2us : put_prev_task_fair (schedule)
927 [...]
928 ls-4473 0d..2 13us : set_normalized_timespec (ktime_get_ts)
929 ls-4473 0d..2 13us : __switch_to (schedule)
930 sshd-4261 0d..2 14us : finish_task_switch (schedule)
931 sshd-4261 0d..2 14us : _spin_unlock_irq (finish_task_switch)
932 sshd-4261 0d..1 15us : add_preempt_count (_spin_lock_irqsave)
933 sshd-4261 0d..2 16us : _spin_unlock_irqrestore (hrtick_set)
934 sshd-4261 0d..2 16us : do_IRQ (common_interrupt)
935 sshd-4261 0d..2 17us : irq_enter (do_IRQ)
936 sshd-4261 0d..2 17us : idle_cpu (irq_enter)
937 sshd-4261 0d..2 18us : add_preempt_count (irq_enter)
938 sshd-4261 0d.h2 18us : idle_cpu (irq_enter)
939 sshd-4261 0d.h. 18us : handle_fasteoi_irq (do_IRQ)
940 sshd-4261 0d.h. 19us : _spin_lock (handle_fasteoi_irq)
941 sshd-4261 0d.h. 19us : add_preempt_count (_spin_lock)
942 sshd-4261 0d.h1 20us : _spin_unlock (handle_fasteoi_irq)
943 sshd-4261 0d.h1 20us : sub_preempt_count (_spin_unlock)
944 [...]
945 sshd-4261 0d.h1 28us : _spin_unlock (handle_fasteoi_irq)
946 sshd-4261 0d.h1 29us : sub_preempt_count (_spin_unlock)
947 sshd-4261 0d.h2 29us : irq_exit (do_IRQ)
948 sshd-4261 0d.h2 29us : sub_preempt_count (irq_exit)
949 sshd-4261 0d..3 30us : do_softirq (irq_exit)
950 sshd-4261 0d... 30us : __do_softirq (do_softirq)
951 sshd-4261 0d... 31us : __local_bh_disable (__do_softirq)
952 sshd-4261 0d... 31us+: add_preempt_count (__local_bh_disable)
953 sshd-4261 0d.s4 34us : add_preempt_count (__local_bh_disable)
954 [...]
955 sshd-4261 0d.s3 43us : sub_preempt_count (local_bh_enable_ip)
956 sshd-4261 0d.s4 44us : sub_preempt_count (local_bh_enable_ip)
957 sshd-4261 0d.s3 44us : smp_apic_timer_interrupt (apic_timer_interrupt)
958 sshd-4261 0d.s3 45us : irq_enter (smp_apic_timer_interrupt)
959 sshd-4261 0d.s3 45us : idle_cpu (irq_enter)
960 sshd-4261 0d.s3 46us : add_preempt_count (irq_enter)
961 sshd-4261 0d.H3 46us : idle_cpu (irq_enter)
962 sshd-4261 0d.H3 47us : hrtimer_interrupt (smp_apic_timer_interrupt)
963 sshd-4261 0d.H3 47us : ktime_get (hrtimer_interrupt)
964 [...]
965 sshd-4261 0d.H3 81us : tick_program_event (hrtimer_interrupt)
966 sshd-4261 0d.H3 82us : ktime_get (tick_program_event)
967 sshd-4261 0d.H3 82us : ktime_get_ts (ktime_get)
968 sshd-4261 0d.H3 83us : getnstimeofday (ktime_get_ts)
969 sshd-4261 0d.H3 83us : set_normalized_timespec (ktime_get_ts)
970 sshd-4261 0d.H3 84us : clockevents_program_event (tick_program_event)
971 sshd-4261 0d.H3 84us : lapic_next_event (clockevents_program_event)
972 sshd-4261 0d.H3 85us : irq_exit (smp_apic_timer_interrupt)
973 sshd-4261 0d.H3 85us : sub_preempt_count (irq_exit)
974 sshd-4261 0d.s4 86us : sub_preempt_count (irq_exit)
975 sshd-4261 0d.s3 86us : add_preempt_count (__local_bh_disable)
976 [...]
977 sshd-4261 0d.s1 98us : sub_preempt_count (net_rx_action)
978 sshd-4261 0d.s. 99us : add_preempt_count (_spin_lock_irq)
979 sshd-4261 0d.s1 99us+: _spin_unlock_irq (run_timer_softirq)
980 sshd-4261 0d.s. 104us : _local_bh_enable (__do_softirq)
981 sshd-4261 0d.s. 104us : sub_preempt_count (_local_bh_enable)
982 sshd-4261 0d.s. 105us : _local_bh_enable (__do_softirq)
983 sshd-4261 0d.s1 105us : trace_preempt_on (__do_softirq)
984
985
986 This is a very interesting trace. It started with the preemption
987 of the ls task. We see that the task had the "need_resched" bit
988 set via the 'N' in the trace. Interrupts were disabled before
989 the spin_lock at the beginning of the trace. We see that a
990 schedule took place to run sshd. When the interrupts were
991 enabled, we took an interrupt. On return from the interrupt
992 handler, the softirq ran. We took another interrupt while
993 running the softirq as we see from the capital 'H'.
994
995
996 wakeup
997 ------
998
999 In a Real-Time environment it is very important to know the
1000 latency from the time the highest priority task is woken up to
1001 the time that it actually executes. This is also known as
1002 "schedule latency". I stress the point that this is about RT
1003 tasks. It is also important to know the scheduling latency of
1004 non-RT tasks, but for them the average schedule latency is the
1005 more useful measure. Tools like LatencyTop are more appropriate
1006 for such measurements.
1007
1008 Real-Time environments are interested in the worst case latency.
1009 That is the longest latency it takes for something to happen,
1010 and not the average. We can have a very fast scheduler that may
1011 only have a large latency once in a while, but that would not
1012 work well with Real-Time tasks. The wakeup tracer was designed
1013 to record the worst case wakeups of RT tasks. Non-RT tasks are
1014 not recorded because the tracer only records one worst case, and
1015 tracing non-RT tasks, which are unpredictable, would overwrite
1016 the worst case latency of RT tasks.
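The "only record a new worst case" behavior can be sketched with a few lines of C. This is an illustration of the bookkeeping, not kernel code; the variable name mirrors the tracing_max_latency file but is not an actual kernel symbol:

```c
#include <assert.h>

/* Mirrors the tracing_max_latency file: a trace is kept only when
 * the new latency beats the stored value; writing 0 to the file
 * resets the threshold so the next latency is recorded. */
static unsigned long tracing_max_latency;

/* Returns 1 when this latency is a new worst case (trace kept),
 * 0 when it is discarded. */
static int record_if_worst(unsigned long latency_us)
{
	if (latency_us <= tracing_max_latency)
		return 0;
	tracing_max_latency = latency_us;
	return 1;
}
```

This is why the examples echo 0 into tracing_max_latency before starting: it clears the previous worst case so the next wakeup is captured.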
1017
1018 Since this tracer only deals with RT tasks, we will run this
1019 slightly differently than we did with the previous tracers.
1020 Instead of performing an 'ls', we will run 'sleep 1' under
1021 'chrt' which changes the priority of the task.
1022
1023 # echo wakeup > current_tracer
1024 # echo latency-format > trace_options
1025 # echo 0 > tracing_max_latency
1026 # echo 1 > tracing_enabled
1027 # chrt -f 5 sleep 1
1028 # echo 0 > tracing_enabled
1029 # cat trace
1030 # tracer: wakeup
1031 #
1032 wakeup latency trace v1.1.5 on 2.6.26-rc8
1033 --------------------------------------------------------------------
1034 latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
1035 -----------------
1036 | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5)
1037 -----------------
1038
1039 # _------=> CPU#
1040 # / _-----=> irqs-off
1041 # | / _----=> need-resched
1042 # || / _---=> hardirq/softirq
1043 # ||| / _--=> preempt-depth
1044 # |||| /
1045 # ||||| delay
1046 # cmd pid ||||| time | caller
1047 # \ / ||||| \ | /
1048 <idle>-0 1d.h4 0us+: try_to_wake_up (wake_up_process)
1049 <idle>-0 1d..4 4us : schedule (cpu_idle)
1050
1051
1052 Running this on an idle system, we see that it only took 4
1053 microseconds to perform the task switch. Note, since the trace
1054 marker in the schedule is before the actual "switch", we stop
1055 the tracing when the recorded task is about to schedule in. This
1056 may change if we add a new marker at the end of the scheduler.
1057
1058 Notice that the recorded task is 'sleep' with the PID of 4901
1059 and it has an rt_prio of 5. This priority is user-space priority
1060 and not the internal kernel priority. The policy is 1 for
1061 SCHED_FIFO and 2 for SCHED_RR.
1062
1063 Doing the same with 'chrt -r 5' and with ftrace_enabled set:
1064
1065 # tracer: wakeup
1066 #
1067 wakeup latency trace v1.1.5 on 2.6.26-rc8
1068 --------------------------------------------------------------------
1069 latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
1070 -----------------
1071 | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5)
1072 -----------------
1073
1074 # _------=> CPU#
1075 # / _-----=> irqs-off
1076 # | / _----=> need-resched
1077 # || / _---=> hardirq/softirq
1078 # ||| / _--=> preempt-depth
1079 # |||| /
1080 # ||||| delay
1081 # cmd pid ||||| time | caller
1082 # \ / ||||| \ | /
1083 ksoftirq-7 1d.H3 0us : try_to_wake_up (wake_up_process)
1084 ksoftirq-7 1d.H4 1us : sub_preempt_count (marker_probe_cb)
1085 ksoftirq-7 1d.H3 2us : check_preempt_wakeup (try_to_wake_up)
1086 ksoftirq-7 1d.H3 3us : update_curr (check_preempt_wakeup)
1087 ksoftirq-7 1d.H3 4us : calc_delta_mine (update_curr)
1088 ksoftirq-7 1d.H3 5us : __resched_task (check_preempt_wakeup)
1089 ksoftirq-7 1d.H3 6us : task_wake_up_rt (try_to_wake_up)
1090 ksoftirq-7 1d.H3 7us : _spin_unlock_irqrestore (try_to_wake_up)
1091 [...]
1092 ksoftirq-7 1d.H2 17us : irq_exit (smp_apic_timer_interrupt)
1093 ksoftirq-7 1d.H2 18us : sub_preempt_count (irq_exit)
1094 ksoftirq-7 1d.s3 19us : sub_preempt_count (irq_exit)
1095 ksoftirq-7 1..s2 20us : rcu_process_callbacks (__do_softirq)
1096 [...]
1097 ksoftirq-7 1..s2 26us : __rcu_process_callbacks (rcu_process_callbacks)
1098 ksoftirq-7 1d.s2 27us : _local_bh_enable (__do_softirq)
1099 ksoftirq-7 1d.s2 28us : sub_preempt_count (_local_bh_enable)
1100 ksoftirq-7 1.N.3 29us : sub_preempt_count (ksoftirqd)
1101 ksoftirq-7 1.N.2 30us : _cond_resched (ksoftirqd)
1102 ksoftirq-7 1.N.2 31us : __cond_resched (_cond_resched)
1103 ksoftirq-7 1.N.2 32us : add_preempt_count (__cond_resched)
1104 ksoftirq-7 1.N.2 33us : schedule (__cond_resched)
1105 ksoftirq-7 1.N.2 33us : add_preempt_count (schedule)
1106 ksoftirq-7 1.N.3 34us : hrtick_clear (schedule)
1107 ksoftirq-7 1dN.3 35us : _spin_lock (schedule)
1108 ksoftirq-7 1dN.3 36us : add_preempt_count (_spin_lock)
1109 ksoftirq-7 1d..4 37us : put_prev_task_fair (schedule)
1110 ksoftirq-7 1d..4 38us : update_curr (put_prev_task_fair)
1111 [...]
1112 ksoftirq-7 1d..5 47us : _spin_trylock (tracing_record_cmdline)
1113 ksoftirq-7 1d..5 48us : add_preempt_count (_spin_trylock)
1114 ksoftirq-7 1d..6 49us : _spin_unlock (tracing_record_cmdline)
1115 ksoftirq-7 1d..6 49us : sub_preempt_count (_spin_unlock)
1116 ksoftirq-7 1d..4 50us : schedule (__cond_resched)
1117
1118 The interrupt went off while running ksoftirqd. This task runs
1119 at SCHED_OTHER. Why did we not see the 'N' set earlier? This may
1120 be a harmless bug with x86_32 and 4K stacks. On x86_32 with 4K
1121 stacks configured, the interrupt and softirq run with their own
1122 stack. Some information is held on the top of the task's stack
1123 (need_resched and preempt_count are both stored there). The
1124 setting of the NEED_RESCHED bit is done directly to the task's
1125 stack, but the reading of the NEED_RESCHED is done by looking at
1126 the current stack, which in this case is the stack for the hard
1127 interrupt. This hides the fact that NEED_RESCHED has been set.
1128 We do not see the 'N' until we switch back to the task's
1129 assigned stack.
1130
1131 function
1132 --------
1133
1134 This tracer is the function tracer. Enabling the function tracer
1135 can be done from the debug file system. Make sure that
1136 ftrace_enabled is set; otherwise this tracer is a nop.
1137
1138 # sysctl kernel.ftrace_enabled=1
1139 # echo function > current_tracer
1140 # echo 1 > tracing_enabled
1141 # usleep 1
1142 # echo 0 > tracing_enabled
1143 # cat trace
1144 # tracer: function
1145 #
1146 # TASK-PID CPU# TIMESTAMP FUNCTION
1147 # | | | | |
1148 bash-4003 [00] 123.638713: finish_task_switch <-schedule
1149 bash-4003 [00] 123.638714: _spin_unlock_irq <-finish_task_switch
1150 bash-4003 [00] 123.638714: sub_preempt_count <-_spin_unlock_irq
1151 bash-4003 [00] 123.638715: hrtick_set <-schedule
1152 bash-4003 [00] 123.638715: _spin_lock_irqsave <-hrtick_set
1153 bash-4003 [00] 123.638716: add_preempt_count <-_spin_lock_irqsave
1154 bash-4003 [00] 123.638716: _spin_unlock_irqrestore <-hrtick_set
1155 bash-4003 [00] 123.638717: sub_preempt_count <-_spin_unlock_irqrestore
1156 bash-4003 [00] 123.638717: hrtick_clear <-hrtick_set
1157 bash-4003 [00] 123.638718: sub_preempt_count <-schedule
1158 bash-4003 [00] 123.638718: sub_preempt_count <-preempt_schedule
1159 bash-4003 [00] 123.638719: wait_for_completion <-__stop_machine_run
1160 bash-4003 [00] 123.638719: wait_for_common <-wait_for_completion
1161 bash-4003 [00] 123.638720: _spin_lock_irq <-wait_for_common
1162 bash-4003 [00] 123.638720: add_preempt_count <-_spin_lock_irq
1163 [...]
1164
1165
1166 Note: function tracer uses ring buffers to store the above
1167 entries. The newest data may overwrite the oldest data.
1168 Sometimes using echo to stop the trace is not sufficient because
1169 the tracing could have overwritten the data that you wanted to
1170 record. For this reason, it is sometimes better to disable
1171 tracing directly from a program. This allows you to stop the
1172 tracing at the point that you hit the part that you are
1173 interested in. To disable the tracing directly from a C program,
1174 something like the following code snippet can be used:
1175
1176 int trace_fd;
1177 [...]
1178 int main(int argc, char *argv[]) {
1179 [...]
1180 trace_fd = open(tracing_file("tracing_enabled"), O_WRONLY);
1181 [...]
1182 if (condition_hit()) {
1183 write(trace_fd, "0", 1);
1184 }
1185 [...]
1186 }
1187
1188
1189 Single thread tracing
1190 ---------------------
1191
1192 By writing into set_ftrace_pid you can trace a
1193 single thread. For example:
1194
1195 # cat set_ftrace_pid
1196 no pid
1197 # echo 3111 > set_ftrace_pid
1198 # cat set_ftrace_pid
1199 3111
1200 # echo function > current_tracer
1201 # cat trace | head
1202 # tracer: function
1203 #
1204 # TASK-PID CPU# TIMESTAMP FUNCTION
1205 # | | | | |
1206 yum-updatesd-3111 [003] 1637.254676: finish_task_switch <-thread_return
1207 yum-updatesd-3111 [003] 1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
1208 yum-updatesd-3111 [003] 1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
1209 yum-updatesd-3111 [003] 1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
1210 yum-updatesd-3111 [003] 1637.254685: fget_light <-do_sys_poll
1211 yum-updatesd-3111 [003] 1637.254686: pipe_poll <-do_sys_poll
1212 # echo -1 > set_ftrace_pid
1213 # cat trace |head
1214 # tracer: function
1215 #
1216 # TASK-PID CPU# TIMESTAMP FUNCTION
1217 # | | | | |
1218 ##### CPU 3 buffer started ####
1219 yum-updatesd-3111 [003] 1701.957688: free_poll_entry <-poll_freewait
1220 yum-updatesd-3111 [003] 1701.957689: remove_wait_queue <-free_poll_entry
1221 yum-updatesd-3111 [003] 1701.957691: fput <-free_poll_entry
1222 yum-updatesd-3111 [003] 1701.957692: audit_syscall_exit <-sysret_audit
1223 yum-updatesd-3111 [003] 1701.957693: path_put <-audit_syscall_exit
1224
1225 If you want to trace a function when executing, you could use
1226 something like this simple program:
1227
1228 #include <stdio.h>
1229 #include <stdlib.h>
1230 #include <sys/types.h>
1231 #include <sys/stat.h>
1232 #include <fcntl.h>
1233 #include <unistd.h>
1234 #include <string.h>
1235
1236 #define _STR(x) #x
1237 #define STR(x) _STR(x)
1238 #define MAX_PATH 256
1239
1240 const char *find_debugfs(void)
1241 {
1242 static char debugfs[MAX_PATH+1];
1243 static int debugfs_found;
1244 char type[100];
1245 FILE *fp;
1246
1247 if (debugfs_found)
1248 return debugfs;
1249
1250 if ((fp = fopen("/proc/mounts","r")) == NULL) {
1251 perror("/proc/mounts");
1252 return NULL;
1253 }
1254
1255 while (fscanf(fp, "%*s %"
1256 STR(MAX_PATH)
1257 "s %99s %*s %*d %*d\n",
1258 debugfs, type) == 2) {
1259 if (strcmp(type, "debugfs") == 0)
1260 break;
1261 }
1262 fclose(fp);
1263
1264 if (strcmp(type, "debugfs") != 0) {
1265 fprintf(stderr, "debugfs not mounted");
1266 return NULL;
1267 }
1268
1269 strcat(debugfs, "/tracing/");
1270 debugfs_found = 1;
1271
1272 return debugfs;
1273 }
1274
1275 const char *tracing_file(const char *file_name)
1276 {
1277 static char trace_file[MAX_PATH+1];
1278 snprintf(trace_file, MAX_PATH, "%s/%s", find_debugfs(), file_name);
1279 return trace_file;
1280 }
1281
1282 int main (int argc, char **argv)
1283 {
1284 if (argc < 2)
1285 exit(-1);
1286
1287 if (fork() > 0) {
1288 int fd, ffd;
1289 char line[64];
1290 int s;
1291
1292 ffd = open(tracing_file("current_tracer"), O_WRONLY);
1293 if (ffd < 0)
1294 exit(-1);
1295 write(ffd, "nop", 3);
1296
1297 fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
1298 s = sprintf(line, "%d\n", getpid());
1299 write(fd, line, s);
1300
1301 write(ffd, "function", 8);
1302
1303 close(fd);
1304 close(ffd);
1305
1306 execvp(argv[1], argv+1);
1307 }
1308
1309 return 0;
1310 }
1311
1312
1313 hw-branch-tracer (x86 only)
1314 ---------------------------
1315
1316 This tracer uses the x86 last branch tracing hardware feature to
1317 collect a branch trace on all cpus with relatively low overhead.
1318
1319 The tracer uses a fixed-size circular buffer per cpu and only
1320 traces ring 0 branches. The trace file dumps that buffer in the
1321 following format:
1322
1323 # tracer: hw-branch-tracer
1324 #
1325 # CPU# TO <- FROM
1326 0 scheduler_tick+0xb5/0x1bf <- task_tick_idle+0x5/0x6
1327 2 run_posix_cpu_timers+0x2b/0x72a <- run_posix_cpu_timers+0x25/0x72a
1328 0 scheduler_tick+0x139/0x1bf <- scheduler_tick+0xed/0x1bf
1329 0 scheduler_tick+0x17c/0x1bf <- scheduler_tick+0x148/0x1bf
1330 2 run_posix_cpu_timers+0x9e/0x72a <- run_posix_cpu_timers+0x5e/0x72a
1331 0 scheduler_tick+0x1b6/0x1bf <- scheduler_tick+0x1aa/0x1bf
1332
1333
1334 On a kernel oops, this tracer may be used to dump the trace of
1335 the oops'ing cpu into the system log. To enable this,
1336 ftrace_dump_on_oops must be set. To set ftrace_dump_on_oops, one
1337 can either use the sysctl command or set it via the proc system
1338 interface.
1339
1340 sysctl kernel.ftrace_dump_on_oops=n
1341
1342 or
1343
1344 echo n > /proc/sys/kernel/ftrace_dump_on_oops
1345
1346 If n = 1, ftrace will dump the buffers of all CPUs; if n = 2, it
1347 will only dump the buffer of the CPU that triggered the oops.
1348
1349 Here's an example of such a dump after a null pointer
1350 dereference in a kernel module:
1351
1352 [57848.105921] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
1353 [57848.106019] IP: [<ffffffffa0000006>] open+0x6/0x14 [oops]
1354 [57848.106019] PGD 2354e9067 PUD 2375e7067 PMD 0
1355 [57848.106019] Oops: 0002 [#1] SMP
1356 [57848.106019] last sysfs file: /sys/devices/pci0000:00/0000:00:1e.0/0000:20:05.0/local_cpus
1357 [57848.106019] Dumping ftrace buffer:
1358 [57848.106019] ---------------------------------
1359 [...]
1360 [57848.106019] 0 chrdev_open+0xe6/0x165 <- cdev_put+0x23/0x24
1361 [57848.106019] 0 chrdev_open+0x117/0x165 <- chrdev_open+0xfa/0x165
1362 [57848.106019] 0 chrdev_open+0x120/0x165 <- chrdev_open+0x11c/0x165
1363 [57848.106019] 0 chrdev_open+0x134/0x165 <- chrdev_open+0x12b/0x165
1364 [57848.106019] 0 open+0x0/0x14 [oops] <- chrdev_open+0x144/0x165
1365 [57848.106019] 0 page_fault+0x0/0x30 <- open+0x6/0x14 [oops]
1366 [57848.106019] 0 error_entry+0x0/0x5b <- page_fault+0x4/0x30
1367 [57848.106019] 0 error_kernelspace+0x0/0x31 <- error_entry+0x59/0x5b
1368 [57848.106019] 0 error_sti+0x0/0x1 <- error_kernelspace+0x2d/0x31
1369 [57848.106019] 0 page_fault+0x9/0x30 <- error_sti+0x0/0x1
1370 [57848.106019] 0 do_page_fault+0x0/0x881 <- page_fault+0x1a/0x30
1371 [...]
1372 [57848.106019] 0 do_page_fault+0x66b/0x881 <- is_prefetch+0x1ee/0x1f2
1373 [57848.106019] 0 do_page_fault+0x6e0/0x881 <- do_page_fault+0x67a/0x881
1374 [57848.106019] 0 oops_begin+0x0/0x96 <- do_page_fault+0x6e0/0x881
1375 [57848.106019] 0 trace_hw_branch_oops+0x0/0x2d <- oops_begin+0x9/0x96
1376 [...]
1377 [57848.106019] 0 ds_suspend_bts+0x2a/0xe3 <- ds_suspend_bts+0x1a/0xe3
1378 [57848.106019] ---------------------------------
1379 [57848.106019] CPU 0
1380 [57848.106019] Modules linked in: oops
1381 [57848.106019] Pid: 5542, comm: cat Tainted: G W 2.6.28 #23
1382 [57848.106019] RIP: 0010:[<ffffffffa0000006>] [<ffffffffa0000006>] open+0x6/0x14 [oops]
1383 [57848.106019] RSP: 0018:ffff880235457d48 EFLAGS: 00010246
1384 [...]
1385
1386
1387 function graph tracer
1388 ---------------------------
1389
1390 This tracer is similar to the function tracer except that it
1391 probes a function on both its entry and its exit. This is done
1392 by using a dynamically allocated stack of return addresses in
1393 each task_struct. On function entry the tracer overwrites the
1394 return address of each traced function to set a custom probe.
1395 Thus the original return address is stored on the stack of
1396 return addresses in the task_struct.
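The entry/exit bookkeeping described above can be modeled in user space. The sketch below shows only the data structure (a per-task stack of saved return addresses and entry timestamps); the actual return-address rewriting is done by the kernel in architecture-specific code, and all names and sizes here are illustrative:

```c
#include <assert.h>

#define RET_STACK_DEPTH 50	/* arbitrary depth for the sketch */

struct ret_entry {
	void *orig_ret;			/* saved original return address */
	unsigned long entry_time;	/* timestamp taken on entry */
};

static struct ret_entry ret_stack[RET_STACK_DEPTH];
static int ret_depth;

/* Called on function entry: returns -1 (stop tracing) if full. */
static int push_return(void *orig_ret, unsigned long now)
{
	if (ret_depth >= RET_STACK_DEPTH)
		return -1;
	ret_stack[ret_depth].orig_ret = orig_ret;
	ret_stack[ret_depth].entry_time = now;
	ret_depth++;
	return 0;
}

/* Called by the exit probe: recovers where to really return to and
 * yields the function's duration as a side effect. */
static void *pop_return(unsigned long now, unsigned long *duration)
{
	ret_depth--;
	*duration = now - ret_stack[ret_depth].entry_time;
	return ret_stack[ret_depth].orig_ret;
}
```

Because the pop happens in the exit probe, the tracer gets both a reliable nesting depth and the duration that the graph output displays.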
1397
1398 Probing on both ends of a function leads to special features
1399 such as:
1400
1401 - measurement of a function's execution time
1402 - having a reliable call stack to draw a graph of function calls
1403
1404 This tracer is useful in several situations:
1405
1406 - you want to find the reason for strange kernel behavior and
1407 need to see what happens in detail in any area (or in specific
1408 ones).
1409
1410 - you are experiencing weird latencies but it's difficult to
1411 find their origin.
1412
1413 - you want to find quickly which path is taken by a specific
1414 function
1415
1416 - you just want to peek inside a working kernel and want to see
1417 what happens there.
1418
1419 # tracer: function_graph
1420 #
1421 # CPU DURATION FUNCTION CALLS
1422 # | | | | | | |
1423
1424 0) | sys_open() {
1425 0) | do_sys_open() {
1426 0) | getname() {
1427 0) | kmem_cache_alloc() {
1428 0) 1.382 us | __might_sleep();
1429 0) 2.478 us | }
1430 0) | strncpy_from_user() {
1431 0) | might_fault() {
1432 0) 1.389 us | __might_sleep();
1433 0) 2.553 us | }
1434 0) 3.807 us | }
1435 0) 7.876 us | }
1436 0) | alloc_fd() {
1437 0) 0.668 us | _spin_lock();
1438 0) 0.570 us | expand_files();
1439 0) 0.586 us | _spin_unlock();
1440
1441
1442 There are several columns that can be dynamically
1443 enabled/disabled. You can use every combination of options you
1444 want, depending on your needs.
1445
1446 - The cpu number on which the function executed is enabled by
1447 default. It is sometimes better to only trace one cpu (see the
1448 tracing_cpumask file), otherwise you might sometimes see
1449 unordered function calls when the traced cpu switches.
1450
1451 hide: echo nofuncgraph-cpu > trace_options
1452 show: echo funcgraph-cpu > trace_options
1453
1454 - The duration (the function's execution time) is displayed on
1455 the closing bracket line of a function, or on the same line
1456 as the function itself if it is a leaf function. It is enabled
1457 by default.
1458
1459 hide: echo nofuncgraph-duration > trace_options
1460 show: echo funcgraph-duration > trace_options
1461
1462 - The overhead field precedes the duration field when the
1463 duration exceeds certain thresholds.
1464
1465 hide: echo nofuncgraph-overhead > trace_options
1466 show: echo funcgraph-overhead > trace_options
1467 depends on: funcgraph-duration
1468
1469 ie:
1470
1471 0) | up_write() {
1472 0) 0.646 us | _spin_lock_irqsave();
1473 0) 0.684 us | _spin_unlock_irqrestore();
1474 0) 3.123 us | }
1475 0) 0.548 us | fput();
1476 0) + 58.628 us | }
1477
1478 [...]
1479
1480 0) | putname() {
1481 0) | kmem_cache_free() {
1482 0) 0.518 us | __phys_addr();
1483 0) 1.757 us | }
1484 0) 2.861 us | }
1485 0) ! 115.305 us | }
1486 0) ! 116.402 us | }
1487
1488 + means that the function exceeded 10 usecs.
1489 ! means that the function exceeded 100 usecs.
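The '+' and '!' annotations can be reproduced by a small helper. The 10 and 100 usec thresholds come straight from the text above; the function name itself is just an illustration, not a kernel symbol:

```c
#include <assert.h>

/* Maps a duration to the overhead marker shown in the trace:
 * ' ' for fast functions, '+' above 10 usecs, '!' above 100. */
static char overhead_marker(double duration_us)
{
	if (duration_us > 100.0)
		return '!';
	if (duration_us > 10.0)
		return '+';
	return ' ';	/* no overhead marker printed */
}
```

For example, the 58.628 us closing bracket above gets '+', while the 115.305 us one gets '!'.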
1490
1491
1492 - The task/pid field displays the thread cmdline and pid which
1493 executed the function. It is default disabled.
1494
1495 hide: echo nofuncgraph-proc > trace_options
1496 show: echo funcgraph-proc > trace_options
1497
1498 ie:
1499
1500 # tracer: function_graph
1501 #
1502 # CPU TASK/PID DURATION FUNCTION CALLS
1503 # | | | | | | | | |
1504 0) sh-4802 | | d_free() {
1505 0) sh-4802 | | call_rcu() {
1506 0) sh-4802 | | __call_rcu() {
1507 0) sh-4802 | 0.616 us | rcu_process_gp_end();
1508 0) sh-4802 | 0.586 us | check_for_new_grace_period();
1509 0) sh-4802 | 2.899 us | }
1510 0) sh-4802 | 4.040 us | }
1511 0) sh-4802 | 5.151 us | }
1512 0) sh-4802 | + 49.370 us | }
1513
1514
1515 - The absolute time field is an absolute timestamp given by the
1516 system clock since it started. A snapshot of this time is
1517 given on each entry/exit of functions.
1518
1519 hide: echo nofuncgraph-abstime > trace_options
1520 show: echo funcgraph-abstime > trace_options
1521
1522 ie:
1523
1524 #
1525 # TIME CPU DURATION FUNCTION CALLS
1526 # | | | | | | | |
1527 360.774522 | 1) 0.541 us | }
1528 360.774522 | 1) 4.663 us | }
1529 360.774523 | 1) 0.541 us | __wake_up_bit();
1530 360.774524 | 1) 6.796 us | }
1531 360.774524 | 1) 7.952 us | }
1532 360.774525 | 1) 9.063 us | }
1533 360.774525 | 1) 0.615 us | journal_mark_dirty();
1534 360.774527 | 1) 0.578 us | __brelse();
1535 360.774528 | 1) | reiserfs_prepare_for_journal() {
1536 360.774528 | 1) | unlock_buffer() {
1537 360.774529 | 1) | wake_up_bit() {
1538 360.774529 | 1) | bit_waitqueue() {
1539 360.774530 | 1) 0.594 us | __phys_addr();
1540
1541
1542 You can put some comments on specific functions by using
1543 trace_printk(). For example, if you want to put a comment inside
1544 the __might_sleep() function, you just have to include
1545 <linux/ftrace.h> and call trace_printk() inside __might_sleep():
1546
1547 trace_printk("I'm a comment!\n")
1548
1549 will produce:
1550
1551 1) | __might_sleep() {
1552 1) | /* I'm a comment! */
1553 1) 1.449 us | }
1554
1555
1556 You might find other useful features for this tracer in the
1557 following "dynamic ftrace" section such as tracing only specific
1558 functions or tasks.
1559
1560 dynamic ftrace
1561 --------------
1562
1563 If CONFIG_DYNAMIC_FTRACE is set, the system will run with
1564 virtually no overhead when function tracing is disabled. The way
1565 this works is that the mcount function call (placed at the start
1566 of every kernel function, produced by the -pg switch in gcc)
1567 starts off pointing to a simple return. (Enabling FTRACE will
1568 include the -pg switch when compiling the kernel.)
1569
1570 At compile time every C file object is run through the
1571 recordmcount.pl script (located in the scripts directory). This
1572 script will process the C object using objdump to find all the
1573 locations in the .text section that call mcount. (Note, only the
1574 .text section is processed, since processing other sections like
1575 .init.text may cause races due to those sections being freed).
1576
1577 A new section called "__mcount_loc" is created that holds
1578 references to all the mcount call sites in the .text section.
1579 This section is compiled back into the original object. The
1580 final linker will add all these references into a single table.
1581
1582 On boot up, before SMP is initialized, the dynamic ftrace code
1583 scans this table and updates all the locations into nops. It
1584 also records the locations, which are added to the
1585 available_filter_functions list. Modules are processed as they
1586 are loaded and before they are executed. When a module is
1587 unloaded, it also removes its functions from the ftrace function
1588 list. This is automatic in the module unload code, and the
1589 module author does not need to worry about it.
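The nop/call bookkeeping described above can be illustrated with a toy table. The kernel patches actual machine code (under kstop_machine, as described below); this sketch, with made-up names, only models which recorded call sites are live at a given moment:

```c
#include <assert.h>
#include <stddef.h>

enum site_state { SITE_NOP, SITE_CALL };

struct call_site {
	const char *func;	/* name, as in available_filter_functions */
	enum site_state state;	/* what the instruction currently is */
	int selected;		/* picked via set_ftrace_filter */
};

/* One update pass: when tracing is on, selected sites become calls
 * into the tracing infrastructure; everything else stays a nop. */
static void ftrace_update_sites(struct call_site *sites, size_t n,
				int tracing_on)
{
	size_t i;

	for (i = 0; i < n; i++)
		sites[i].state = (tracing_on && sites[i].selected)
				 ? SITE_CALL : SITE_NOP;
}
```

Disabling tracing is the same pass with tracing_on set to 0, which returns every site to a nop and restores the near-zero overhead.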
1590
1591 When tracing is enabled, kstop_machine is called to prevent
1592 races with the CPUs executing the code being modified (which
1593 can cause a CPU to do undesirable things), and the nops are
1594 patched back to calls. But this time, they do not call mcount
1595 (which is just a function stub). They now call into the ftrace
1596 infrastructure.
1597
1598 One special side-effect of recording the functions being
1599 traced is that we can now selectively choose which functions we
1600 wish to trace, and for which ones the mcount calls should
1601 remain as nops.
1602
1603 Two files are used, one for enabling and one for disabling the
1604 tracing of specified functions. They are:
1605
1606 set_ftrace_filter
1607
1608 and
1609
1610 set_ftrace_notrace
1611
1612 A list of available functions that you can add to these files is
1613 listed in:
1614
1615 available_filter_functions
1616
1617 # cat available_filter_functions
1618 put_prev_task_idle
1619 kmem_cache_create
1620 pick_next_task_rt
1621 get_online_cpus
1622 pick_next_task_fair
1623 mutex_lock
1624 [...]
1625
1626 If I am only interested in sys_nanosleep and hrtimer_interrupt:
1627
1628 # echo sys_nanosleep hrtimer_interrupt \
1629 > set_ftrace_filter
1630 # echo function > current_tracer
1631 # echo 1 > tracing_enabled
1632 # usleep 1
1633 # echo 0 > tracing_enabled
1634 # cat trace
1635 # tracer: ftrace
1636 #
1637 # TASK-PID CPU# TIMESTAMP FUNCTION
1638 # | | | | |
1639 usleep-4134 [00] 1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt
1640 usleep-4134 [00] 1317.070111: sys_nanosleep <-syscall_call
1641 <idle>-0 [00] 1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt
1642
1643 To see which functions are being traced, you can cat the file:
1644
1645 # cat set_ftrace_filter
1646 hrtimer_interrupt
1647 sys_nanosleep
1648
1649
1650 Perhaps this is not enough. The filters also allow simple wild
1651 cards. Only the following are currently available:
1652
1653 <match>* - will match functions that begin with <match>
1654 *<match> - will match functions that end with <match>
1655 *<match>* - will match functions that have <match> in it
1656
1657 These are the only wild cards which are supported.
1658
1659 <match>*<match> will not work.
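The three matching rules can be sketched in user space as follows. This is only an illustration of the rules listed above, not the kernel's matcher, and the function name is made up; patterns outside the three supported forms are simply treated as literal text here:

```c
#include <assert.h>
#include <string.h>

/* Returns 1 when 'func' matches 'pat', where pat is a plain name,
 * <match>*, *<match> or *<match>*. */
static int filter_match(const char *pat, const char *func)
{
	char core[128];
	size_t plen = strlen(pat), flen = strlen(func);
	size_t clen;
	int head, tail;

	if (plen == 0)
		return 0;
	if (plen == 1 && pat[0] == '*')
		return 1;			/* bare '*' matches all */

	head = pat[0] == '*';
	tail = pat[plen - 1] == '*';
	clen = plen - head - tail;		/* literal part length */
	if (clen >= sizeof(core))
		return 0;
	memcpy(core, pat + head, clen);
	core[clen] = '\0';

	if (head && tail)			/* *<match>* */
		return strstr(func, core) != NULL;
	if (tail)				/* <match>*  */
		return strncmp(func, core, clen) == 0;
	if (head)				/* *<match>  */
		return flen >= clen &&
		       strcmp(func + (flen - clen), core) == 0;
	return strcmp(func, core) == 0;		/* plain name */
}
```

So 'hrtimer_*' matches hrtimer_interrupt but not sys_nanosleep, while '*sleep' matches sys_nanosleep.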
1660
1661 Note: It is better to use quotes to enclose the wild cards,
1662 otherwise the shell may expand the parameters into names
1663 of files in the local directory.
1664
1665 # echo 'hrtimer_*' > set_ftrace_filter
1666
1667 Produces:
1668
1669 # tracer: ftrace
1670 #
1671 # TASK-PID CPU# TIMESTAMP FUNCTION
1672 # | | | | |
1673 bash-4003 [00] 1480.611794: hrtimer_init <-copy_process
1674 bash-4003 [00] 1480.611941: hrtimer_start <-hrtick_set
1675 bash-4003 [00] 1480.611956: hrtimer_cancel <-hrtick_clear
1676 bash-4003 [00] 1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel
1677 <idle>-0 [00] 1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt
1678 <idle>-0 [00] 1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt
1679 <idle>-0 [00] 1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt
1680 <idle>-0 [00] 1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt
1681 <idle>-0 [00] 1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt
1682
1683
1684 Notice that we lost the sys_nanosleep function.
1685
1686 # cat set_ftrace_filter
1687 hrtimer_run_queues
1688 hrtimer_run_pending
1689 hrtimer_init
1690 hrtimer_cancel
1691 hrtimer_try_to_cancel
1692 hrtimer_forward
1693 hrtimer_start
1694 hrtimer_reprogram
1695 hrtimer_force_reprogram
1696 hrtimer_get_next_event
1697 hrtimer_interrupt
1698 hrtimer_nanosleep
1699 hrtimer_wakeup
1700 hrtimer_get_remaining
1701 hrtimer_get_res
1702 hrtimer_init_sleeper
1703
1704
1705 This is because the '>' and '>>' act just like they do in bash.
1706 To rewrite the filters, use '>'
1707 To append to the filters, use '>>'

To clear out a filter so that all functions will be recorded
again:

 # echo > set_ftrace_filter
 # cat set_ftrace_filter
#

Again, now we want to append.

 # echo sys_nanosleep > set_ftrace_filter
 # cat set_ftrace_filter
sys_nanosleep
 # echo 'hrtimer_*' >> set_ftrace_filter
 # cat set_ftrace_filter
hrtimer_run_queues
hrtimer_run_pending
hrtimer_init
hrtimer_cancel
hrtimer_try_to_cancel
hrtimer_forward
hrtimer_start
hrtimer_reprogram
hrtimer_force_reprogram
hrtimer_get_next_event
hrtimer_interrupt
sys_nanosleep
hrtimer_nanosleep
hrtimer_wakeup
hrtimer_get_remaining
hrtimer_get_res
hrtimer_init_sleeper


The set_ftrace_notrace file prevents the functions listed in it
from being traced.

 # echo '*preempt*' '*lock*' > set_ftrace_notrace

Produces:

# tracer: ftrace
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
            bash-4043  [01]   115.281644: finish_task_switch <-schedule
            bash-4043  [01]   115.281645: hrtick_set <-schedule
            bash-4043  [01]   115.281645: hrtick_clear <-hrtick_set
            bash-4043  [01]   115.281646: wait_for_completion <-__stop_machine_run
            bash-4043  [01]   115.281647: wait_for_common <-wait_for_completion
            bash-4043  [01]   115.281647: kthread_stop <-stop_machine_run
            bash-4043  [01]   115.281648: init_waitqueue_head <-kthread_stop
            bash-4043  [01]   115.281648: wake_up_process <-kthread_stop
            bash-4043  [01]   115.281649: try_to_wake_up <-wake_up_process

We can see that there's no more lock or preempt tracing.


Dynamic ftrace with the function graph tracer
---------------------------------------------

Although what has been explained above concerns both the
function tracer and the function-graph-tracer, there are some
special features only available in the function-graph tracer.

If you want to trace only one function and all of its children,
you just have to echo its name into set_graph_function:

 echo __do_fault > set_graph_function

will produce the following "expanded" trace of the __do_fault()
function:

 0)               |  __do_fault() {
 0)               |    filemap_fault() {
 0)               |      find_lock_page() {
 0)   0.804 us    |        find_get_page();
 0)               |        __might_sleep() {
 0)   1.329 us    |        }
 0)   3.904 us    |      }
 0)   4.979 us    |    }
 0)   0.653 us    |    _spin_lock();
 0)   0.578 us    |    page_add_file_rmap();
 0)   0.525 us    |    native_set_pte_at();
 0)   0.585 us    |    _spin_unlock();
 0)               |    unlock_page() {
 0)   0.541 us    |      page_waitqueue();
 0)   0.639 us    |      __wake_up_bit();
 0)   2.786 us    |    }
 0) + 14.237 us   |  }
 0)               |  __do_fault() {
 0)               |    filemap_fault() {
 0)               |      find_lock_page() {
 0)   0.698 us    |        find_get_page();
 0)               |        __might_sleep() {
 0)   1.412 us    |        }
 0)   3.950 us    |      }
 0)   5.098 us    |    }
 0)   0.631 us    |    _spin_lock();
 0)   0.571 us    |    page_add_file_rmap();
 0)   0.526 us    |    native_set_pte_at();
 0)   0.586 us    |    _spin_unlock();
 0)               |    unlock_page() {
 0)   0.533 us    |      page_waitqueue();
 0)   0.638 us    |      __wake_up_bit();
 0)   2.793 us    |    }
 0) + 14.012 us   |  }

You can also expand several functions at once:

 echo sys_open > set_graph_function
 echo sys_close >> set_graph_function

Now if you want to go back to trace all functions you can clear
this special filter via:

 echo > set_graph_function


trace_pipe
----------

The trace_pipe outputs the same content as the trace file, but
the effect on the tracing is different. Every read from
trace_pipe is consumed. This means that subsequent reads will be
different. The trace is live.
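
The consuming behavior can be imitated with an ordinary FIFO.
This is only an analogy, using a made-up scratch path rather
than the real trace_pipe:

```shell
d=$(mktemp -d)
mkfifo "$d/pipe"
printf 'event A\nevent B\n' > "$d/pipe" &  # writer in the background
cat "$d/pipe"   # this read consumes the data; it is gone afterwards
wait
rm -r "$d"
```

A second read from the FIFO would find nothing and block, just
as a read from trace_pipe blocks until new events arrive.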

 # echo function > current_tracer
 # cat trace_pipe > /tmp/trace.out &
[1] 4153
 # echo 1 > tracing_enabled
 # usleep 1
 # echo 0 > tracing_enabled
 # cat trace
# tracer: function
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |

#
 # cat /tmp/trace.out
            bash-4043  [00]    41.267106: finish_task_switch <-schedule
            bash-4043  [00]    41.267106: hrtick_set <-schedule
            bash-4043  [00]    41.267107: hrtick_clear <-hrtick_set
            bash-4043  [00]    41.267108: wait_for_completion <-__stop_machine_run
            bash-4043  [00]    41.267108: wait_for_common <-wait_for_completion
            bash-4043  [00]    41.267109: kthread_stop <-stop_machine_run
            bash-4043  [00]    41.267109: init_waitqueue_head <-kthread_stop
            bash-4043  [00]    41.267110: wake_up_process <-kthread_stop
            bash-4043  [00]    41.267110: try_to_wake_up <-wake_up_process
            bash-4043  [00]    41.267111: select_task_rq_rt <-try_to_wake_up


Note, reading the trace_pipe file will block until more input
is added. Changing the tracer causes trace_pipe to issue an
EOF. This is why we needed to set the function tracer _before_
we "cat" the trace_pipe file.


trace entries
-------------

Having too much or not enough data can be troublesome in
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the number of kilobytes each per-CPU buffer
can hold. To know the full size, multiply the number of
possible CPUs by this number.

 # cat buffer_size_kb
1408 (units kilobytes)

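As a worked example of that multiplication, assuming a
hypothetical 4-CPU machine and the 1408 kilobyte value shown
above:

```shell
cpus=4            # hypothetical CPU count; use nproc on a real system
per_cpu_kb=1408   # the value read from buffer_size_kb
echo "total: $((cpus * per_cpu_kb)) kB"   # prints "total: 5632 kB"
```
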
Note, to modify this, you must have tracing completely disabled.
To do that, echo "nop" into the current_tracer. If the
current_tracer is not set to "nop", an EINVAL error will be
returned.

 # echo nop > current_tracer
 # echo 10000 > buffer_size_kb
 # cat buffer_size_kb
10000 (units kilobytes)

The number of pages which will be allocated is limited to a
percentage of available memory. Allocating too much will produce
an error.

 # echo 1000000000000 > buffer_size_kb
-bash: echo: write error: Cannot allocate memory
 # cat buffer_size_kb
85

-----------

More details can be found in the source code, in the
kernel/trace/*.c files.