Trace analysis using trace.py
The purpose of the script is to display trace data, possibly applying various transformations to make the data more digestible. It operates on a trace file, which needs to be obtained beforehand.
See the Tracepoints page to learn more about tracing in OSv, how to enable it, etc.
Trace analysis requires a collection of scripts from osv.git/scripts. The command line interface is accessed by invoking scripts/trace.py from the base OSv directory like this:
$ scripts/trace.py -h
It's perfectly OK to invoke the script as above, but it may be more convenient to create a shorthand alias in your development environment using the following shell commands (you can put them in your .bashrc):
OSV_BASE=~/src/osv
export PYTHONPATH=${PYTHONPATH}:${OSV_BASE}/scripts
alias trace=${OSV_BASE}/scripts/trace.py
Now you can call the script like this from any location:
$ trace -h
For brevity I will use the shorter version in this document.
Currently the trace file can be obtained only via GDB. Support for direct download from an OSv instance is planned.
To save the trace execute:
$ trace extract
This will invoke GDB and the trace will be saved to a file named tracefile (by default).
Before extracting a trace, you first need to run OSv with tracepoints enabled. I use one of the tests in this example:
$ scripts/run.py -H --trace=memory_malloc --trace-backtrace -e "/tests/misc-fs-stress.so file"
Explanation of parameters:
- -H means the OSv instance should not terminate on poweroff; we use it because we need to get the trace before it powers off
- --trace=memory_malloc enables the memory_malloc tracepoint which we are interested in collecting
- --trace-backtrace enables collecting a call backtrace at each tracepoint; we want to know where the tracepoints are called from
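Note that --trace can be given multiple times and accepts wildcard patterns (both are used later in this document). A sketch combining them; the particular selection of tracepoints here is purely illustrative:
# collect memory_malloc plus all sched_wait* tracepoints in one run
$ scripts/run.py -H --trace=memory_malloc --trace=sched_wait* --trace-backtrace -e "/tests/misc-fs-stress.so file"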
Let the test run for a while, then collect the trace:
$ trace extract
Then you can print some statistics about the trace file:
$ trace summary
Collected 31742 samples spanning 2.57 s
Tracepoint statistics:
name count
---- -----
memory_malloc 31742
You can list the samples:
$ trace list
0xffffc0003e990010 0 1389705265.413128853 memory_malloc buf=0xffffc0003cb73500, len=152
0xffffc0003e990010 0 1389705265.413129568 memory_malloc buf=0xffffc00036677980, len=80
0xffffc0003e990010 0 1389705265.413130045 memory_malloc buf=0xffffc0003cb44b80, len=80
...
The meaning of the columns is:
Thread ID CPU TIMESTAMP TRACEPOINT ARGUMENTS
0xffffc0003e990010 0 1389705265.413128853 memory_malloc buf=0xffffc0003cb73500, len=152
If a backtrace was collected you can show it by passing the -b option to trace list:
$ trace list -b
0x0000000000d0ee58 0 1389705320.885835886 memory_malloc buf=0xffffc00035f47800, len=192 [log, operator(), zfs_kmem_alloc, zfs_range_lock, zfs_get_data, zil_lwb_commit, zil_close, zfsvfs_teardown, sys_umount2, unmount_rootfs]
0x0000000000d0ee58 0 1389705320.885836601 memory_malloc buf=0xffffc0003fe1f158, len=8 [log, operator(), zfs_kmem_alloc, dmu_buf_hold_array_by_dnode, zfs_get_data, zil_lwb_commit, zil_close, zfsvfs_teardown, sys_umount2, unmount_rootfs]
The backtrace associated with trace samples contains only raw addresses. The script uses addr2line to resolve these into symbols. For that to work it needs an object file with debug information. By default it uses build/release/loader.elf. You can change the default by passing -d or --debug, in which case it uses build/debug/loader.elf. You can also point it to an arbitrary object file using the -e or --exe option. You can disable symbol resolution by passing -x or --no-resolve, in which case only raw addresses are displayed.
The user should ensure that the correct object file is used. There may be no sign, other than a suspicious symbol chain in the trace, that the wrong object file was used for symbol resolution.
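For example, if the image was built in debug mode, or you want to be explicit about which object file is used, the options above can be combined with list -b; a sketch:
# resolve symbols against the debug build
$ trace list -b -d
# resolve symbols against an explicitly chosen object file
$ trace list -b --exe build/release/loader.elf
# skip symbol resolution and show raw addresses only
$ trace list -b -x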
By default only the function name is displayed. More information can be added using these options; any combination is allowed:
- -L, --show-line-number, eg: log:320
- -A, --show-address, eg: log @ 0x2032e3
- -F, --show-file-name, eg: log /home/tgrabiec/src/osv/include/osv/trace.hh
Applying all of the above (-LAF) will yield: log /home/tgrabiec/src/osv/include/osv/trace.hh:320 @ 0x2032e3
Symbol resolution options can be used with all commands which display backtrace.
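So, per the note above, the same switches should work for list and for the profile commands shown later; a sketch:
# show file name, line number and address for each resolved frame
$ trace list -b -LAF
# profile commands display backtraces too, so the options apply there as well
$ trace prof -LAF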
Tracepoints only carry information about the point in time at which a particular place in the code was reached. However, we can use a pair of trace samples to calculate the amount of time it took to execute the code between them. By convention, the ending sample's name starts with the name of the entry sample and ends with either _ret or _err. Such a pair can be used to create a so-called timed trace, which is basically the entry trace plus a duration.
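To get timed traces, both ends of a pair have to be collected; the simplest way is a wildcard that matches both, as in this sketch (sched_wait/sched_wait_ret is the pair used in the examples below):
# sched_wait and sched_wait_ret samples will be combined into timed traces
$ scripts/run.py -H --trace=sched_wait* --trace-backtrace -e "/tests/misc-fs-stress.so file"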
You can print a timed trace summary using the --timed option of the summary command:
$ trace summary --timed
Collected 36843 samples spanning 8.85 s
Tracepoint statistics:
name count
---- -----
sched_wait 18420
sched_wait_ret 18423
Timed tracepoints [ms]:
name count min 50% 90% 99% 99.9% max total
---- ----- --- --- --- --- ----- --- -----
sched_wait 18255 0.000 0.043 15.210 2226.438 4232.178 6125.616 1066349.137
You can list timed samples using the list-timed command. Passing --sort duration sorts by duration, descending. The duration is in the column next to the tracepoint name:
$ trace list-timed --sort duration
0xffffc00009680000 1 1389721284.391018152 6125.616 sched_wait
0xffffc0003cc1d000 0 1389721284.387134552 6022.635 sched_wait
0xffffc0003f386000 0 1389721281.852320671 4893.802 sched_wait
0xffffc0003eece000 0 1389721281.852322102 4893.799 sched_wait
0xffffc0003ef37000 0 1389721281.852324009 4893.791 sched_wait
0xffffc0003ef4c000 0 1389721281.852328539 4893.791 sched_wait
0xffffc0003ef61000 0 1389721281.852325916 4893.791 sched_wait
0xffffc0003eef8000 0 1389721281.852327824 4893.790 sched_wait
When backtrace collecting is enabled (--trace-backtrace), trace samples carry information about where they were hit. This information can be used to identify hot places in the code, or more generally to annotate code with information carried by trace samples. Such an annotated call-graph is called a "profile" in this document.
The prof command can be used to display a profile with the number of times tracepoints were hit. Example:
$ trace prof
(100.00%, #31742) All
|-- (79.68%, #25291) zfs_kmem_alloc
| |-- (16.95%, #5380) zio_create
| | |-- (8.96%, #2844) dmu_buf_hold_array_by_dnode
| | | |-- (8.86%, #2811) zfs_get_data
| | | | zil_lwb_commit
| | | | zil_commit
| | | | zfsvfs_teardown
| | | | sys_umount2
| | | |
| | | |-- (0.10%, #32) dmu_buf_hold_array
| | | | space_map_sync
| | | | metaslab_sync
| | | | vdev_sync
| | | | spa_sync
| | | |
| | | \-- (0.00%, #1) space_map_load
| | | metaslab_activate
| | | metaslab_group_alloc
| | | zio_dva_allocate
| | | zio_execute
(...)
The tree shows all paths leading to memory_malloc. On every level, nodes are sorted by the number of hits in descending order, so the most frequent path is at the top.
We can see that 79.68% of samples come from zfs_kmem_alloc, which is called from different places. Among these places the most frequent is zio_create, which is hit in 16.95% of all samples. And so on.
When there is only one path leading to a given node, the path elements are lined up one after another, like this:
- (8.86%, #2811) zfs_get_data
zil_lwb_commit
zil_commit
zfsvfs_teardown
sys_umount2
By default prof shows all tracepoints. If you have many tracepoints in the trace, you can narrow the analysis down to a single tracepoint using the -t option, eg prof -t memory_malloc.
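For example, to look only at allocation paths and keep the output small, -t can be combined with the filtering options described further below; a sketch:
# restrict the profile to memory_malloc and show only the top two levels
$ trace prof -t memory_malloc --max-levels 2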
The wait profile belongs to a class of profiles which annotate the call-graph with a time weight, typically representing time spent inside a particular place in the code. These profiles are called time profiles.
The wait profile uses the time spent in sched::thread::wait() to annotate the call-graph. The time is calculated from samples of two tracepoints: sched_wait and sched_wait_ret. To be able to show this profile, you need to start OSv with --trace=sched_wait* or enable these tracepoints dynamically.
Example
First, collect some traces by running OSv like this:
$ scripts/run.py -H --trace=vfs_pwritev* --trace=sched_wait* --trace-backtrace -e "tests/misc-fs-stress.so ff"
The vfs_pwritev* tracepoints will be needed later to investigate the latency of writes.
After the test is done execute:
$ trace extract
$ trace prof-wait -g thread
You will see a lot of output looking like this:
=== Thread 0xffffc0003d511000 ===
12.10 s (100.00%, #581) All
|-- 12.10 s (99.98%, #561) do_wait_until
| TQ_SLEEP
| sched::thread::main()
|
|-- 1.35 ms (0.01%, #14) waiter::wait(sched::timer*) const
| wait_until<memory::pool::free_different_cpu(memory::pool::free_object*, unsigned int)::__lambda1>
| memory::pool::free(void*)
| |-- 582.01 us (0.00%, #3) zio_destroy
| | zio_execute
| | zio_notify_parent
| | zio_execute
| | taskqueue_run_locked
| |
| |-- 466.61 us (0.00%, #4) dbuf_write_done
| | arc_write_done
| | zio_done
| | zio_execute
| | zio_notify_parent
| |
| |-- 274.89 us (0.00%, #6) zio_remove_child
| | zio_execute
| | |-- 196.16 us (0.00%, #4) zio_notify_parent
| | | zio_execute
| | | taskqueue_run_locked
| | |
| | \-- 78.73 us (0.00%, #2) taskqueue_run_locked
| | sched::thread::main()
What to do with all this information depends on what we're trying to find out. The test we ran measures the throughput of POSIX write(), which eventually calls zfs_write(). In addition, the test records the latency of write() and prints a summary after the test, which may look like this:
Latency of write() [s]:
0 0.000000092
0.5 0.000004264
0.9 0.000005597
0.99 0.000012213
0.999 0.004506400
1.0 2.430870533
As we can see, in this run there were calls which took as long as 2.4s to return. Suppose we want to find out why. Let's see if the latency is really inside vfs_pwritev:
$ trace summary --timed
(...)
Timed tracepoints [ms]:
name count min 50% 90% 99% 99.9% max total
---- ----- --- --- --- --- ----- --- -----
vfs_pwritev 44949 0.004 0.005 0.008 1.089 18.284 2430.869 8086.956
(...)
Yes, the max duration of vfs_pwritev is 2430 ms, which is what the test reports. Let's locate the long sample using list-timed:
$ trace list-timed --sort=duration | grep pwritev | head
0xffffc0003e990010 1 1389797838.338931799 2430.869 vfs_pwritev 3 0x00002000002ff9b0 0x2 0x-1
0xffffc0003e990010 0 1389797842.818786860 578.166 vfs_pwritev 3 0x00002000002ff9b0 0x2 0x-1
0xffffc0003e990010 1 1389797837.760517836 578.070 vfs_pwritev 3 0x00002000002ff9b0 0x2 0x-1
0xffffc0003e990010 0 1389797843.445839882 232.688 vfs_pwritev 3 0x00002000002ff9b0 0x2 0x-1
0xffffc0003e990010 1 1389797840.812511921 143.236 vfs_pwritev 3 0x00002000002ff9b0 0x2 0x-1
0xffffc0003e990010 0 1389797842.681091785 95.171 vfs_pwritev 3 0x00002000002ff9b0 0x2 0x-1
0xffffc0003e990010 1 1389797841.075021982 86.724 vfs_pwritev 3 0x00002000002ff9b0 0x2 0x-1
We now know when the issue happened and can pass that information to prof-wait (note the presence of time units in the parameters):
$ trace prof-wait --since 1389797838.338931799s --period 2430.869ms
What we do here is called time slicing. The profile will include only samples which overlap with the specified period. If a sample crosses the edge of the time range, its duration will be trimmed to reflect only the overlapping part.
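Time slicing combines with the other profile options; for instance, grouping the sliced profile by thread (as in the earlier prof-wait invocation) could look like this sketch, reusing the timestamp and duration found above:
# wait profile restricted to the 2.43 s window around the slow write, one tree per thread
$ trace prof-wait -g thread --since 1389797838.338931799s --period 2430.869ms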
Searching for zfs_write gives evidence that the pause is caused by waiting for a ZIO to complete:
=== Thread 0xffffc0003e990010 ===
2.43 s (100.00%, #1) All
waiter::wait(sched::timer*) const
condvar::wait(lockfree::mutex*, unsigned long)
zio_wait
dmu_tx_count_write
dmu_tx_hold_write
zfs_write
vfs_file::write(uio*, int)
sys_write
The investigation would continue from here: why are we waiting for a ZIO to complete during an asynchronous write? That question is beyond the scope of this tutorial.
The way a profile is displayed can be controlled with various options.
Grouping
By default, samples are presented in one tree. The tool allows grouping samples by a couple of properties via the -g or --group-by option:
- -g thread - groups data by thread
- -g cpu - groups data by CPU
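For example:
# one tree per thread (as in the wait-profile example earlier)
$ trace prof-wait -g thread
# one tree per CPU
$ trace prof-wait -g cpu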
Orientation
By default, frames closer to the tracepoint call-site are closer to the root. We call this kind of profile callee-oriented, because callees are higher up. It shows the paths leading to the tracepoint.
You can reverse the orientation by passing -r or --caller-oriented. In this case the bottom-most frames will be at the root, their callees will branch from them, and the tracepoint call-sites will end up in the leaves. This orientation is useful for showing how bigger operations are composed.
NOTE: be careful with caller-oriented profiles. If the maximum backtrace length used when collecting the trace is too low, it may not cover the bottom-most frames (thread_main()) for some samples. As a result, traces which would otherwise end up under a single root will be spread across multiple first-level nodes.
Example of a memory_malloc hit profile, caller-oriented:
$ trace prof -r
(100.00%, #31742) All
|-- (29.60%, #9397) sched::thread::main()
| |-- (10.02%, #3182) txg_sync_thread
| | |-- (10.02%, #3180) spa_sync
| | | |-- (6.57%, #2086) bplist_iterate
| | | | spa_free_sync_cb
| | | | |-- (3.29%, #1043) zio_create
| | | | | zfs_kmem_alloc
| | | | |
| | | | \-- (3.29%, #1043) zio_add_child
| | | | zfs_kmem_alloc
| | | |
| | | |-- (1.90%, #603) dsl_pool_sync
| | | | |-- (1.78%, #565) dmu_objset_sync
| | | | | |-- (1.68%, #532) zil_sync
| | | | | | bplist_append
| | | | | | zfs_kmem_alloc
| | | | | |
| | | | | |-- (0.03%, #11) zio_create
| | | | | | zfs_kmem_alloc
| | | | | |
| | | | | |-- (0.03%, #11) zio_add_child
| | | | | | zfs_kmem_alloc
| | | | | |
| | | | | \-- (0.03%, #11) arc_write
| | | | | zfs_kmem_alloc
Function rooting
Sometimes you want to focus on a particular function. For example, I would like to know which of the paths leading to memory allocation come from dsl_pool_sync(). There's an option which allows that: --function. If that option is used with --caller-oriented, it will show the callees of that function. Otherwise it will show the callers.
Example (caller-oriented, showing callees of dsl_pool_sync):
$ trace prof -r --function dsl_pool_sync
(100.00%, #813) All
dsl_pool_sync
|-- (76.51%, #622) dmu_objset_sync
| |-- (65.44%, #532) zil_sync
| | bplist_append
| | zfs_kmem_alloc
| |
| |-- (7.01%, #57) dnode_sync
| | |-- (6.64%, #54) dbuf_sync_leaf
| | | dbuf_write
| | | |-- (2.21%, #18) zio_create
| | | | zfs_kmem_alloc
| | | |
| | | |-- (2.21%, #18) zio_add_child
| | | | zfs_kmem_alloc
| | | |
| | | \-- (2.21%, #18) arc_write
| | | zfs_kmem_alloc
| | |
(...)
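Without -r the same option shows the callers instead; a sketch:
# without --caller-oriented, --function shows the callers of dsl_pool_sync
$ trace prof --function dsl_pool_sync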
Filtering
Often the profile has too much data. There are some switches you can use to filter it. One of them is --max-levels, which reduces the number of tree levels displayed in the profile. Example:
$ trace prof --max-levels 1
(100.00%, #31742) All
|-- (79.68%, #25291) zfs_kmem_alloc
|
|-- (14.90%, #4728) operator new(unsigned long)
|
|-- (3.30%, #1047) alloc_bio
|
|-- (2.10%, #665) virtio::vring::add_buf(void*)
| virtio::vring::add_buf_wait(void*)
| virtio::blk::make_request(bio*)
| blk_strategy
| vdev_disk_start_bio
| zio_vdev_io_start
| zio_execute
|
\-- (0.03%, #11) AcpiUtAllocate
You can also use --min-duration in time profiles to hide nodes with a resident time shorter than specified.
You can filter out infrequently hit paths using --min-hits, eg:
- --min-hits 500 - shows only paths which were hit at least 500 times
- --min-hits 10% - shows only paths which were visited in at least 10% of cases
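These filters can be combined with the other prof options; a sketch with arbitrarily chosen thresholds:
# hide paths hit fewer than 500 times and cap the tree at three levels
$ trace prof -t memory_malloc --min-hits 500 --max-levels 3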
OSv has support for creating trace samples from network packets. To enable capturing of packets, enable the following tracepoints:
- net_packet_loopback
- net_packet_eth
- or just net_packet* to cover them all
For example, the guest could be started like this:
$ sudo scripts/run.py -nv --trace net_packet*
To demonstrate that tracing works, let's put some network traffic through OSv:
$ curl http://osv:8000/os/version
"v0.07-113-gaeafbcb"
Let's extract the trace:
$ trace extract
Now we can examine the packets:
$ trace tcpdump
20:16:49.486215 IP 192.168.122.1.43616 > 192.168.122.89.8000: Flags [S], seq 3523681620, win 29200, options [mss 1460,sackOK,TS val 92119968 ecr 0,nop,wscale 7], length 0
20:16:49.486269 ARP, Request who-has 192.168.122.1 tell 192.168.122.89, length 28
20:16:49.486457 ARP, Reply 192.168.122.1 is-at ba:ec:0e:fc:7f:a4, length 28
20:16:49.486467 IP 192.168.122.89.8000 > 192.168.122.1.43616: Flags [S.], seq 1855917805, ack 3523681621, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 4007040777 ecr 92119968], length 0
20:16:49.486709 IP 192.168.122.1.43616 > 192.168.122.89.8000: Flags [.], ack 1, win 229, options [nop,nop,TS val 92119968 ecr 4007040777], length 0
20:16:49.487197 IP truncated-ip - 20 bytes missing! 192.168.122.1.43616 > 192.168.122.89.8000: Flags [P.], seq 1:83, ack 1, win 229, options [nop,nop,TS val 92119968 ecr 4007040777], length 82
20:16:49.487758 IP truncated-ip - 23 bytes missing! 192.168.122.89.8000 > 192.168.122.1.43616: Flags [P.], seq 1:86, ack 83, win 1040, options [nop,nop,TS val 4007040778 ecr 92119968], length 85
20:16:49.487840 IP 192.168.122.89.8000 > 192.168.122.1.43616: Flags [F.], seq 86, ack 83, win 1040, options [nop,nop,TS val 4007040778 ecr 92119968], length 0
20:16:49.487980 IP 192.168.122.1.43616 > 192.168.122.89.8000: Flags [.], ack 86, win 229, options [nop,nop,TS val 92119968 ecr 4007040778], length 0
20:16:49.488048 IP 192.168.122.1.43616 > 192.168.122.89.8000: Flags [F.], seq 83, ack 87, win 229, options [nop,nop,TS val 92119968 ecr 4007040778], length 0
20:16:49.488061 IP 192.168.122.89.8000 > 192.168.122.1.43616: Flags [.], ack 84, win 1040, options [nop,nop,TS val 4007040779 ecr 92119968], length 0
Note that some data packets have this "X bytes missing!" warning. That's because there is a limit on how many bytes are captured per packet. That limit is currently 128 bytes (see capture_limit in net_trace.cc).
You can also see the raw samples in trace list:
$ trace list
...
0xffffc0003e24b2b0 virtio-net-rx 0 1398795409.487980366 net_packet_eth if=2, data={52 54 00 12 34 56 ba ec 0e fc 7f a4 08 00 45 00 00 34 be f7 40 00 40 06 06 21 c0 a8 7a 01 c0 a8 7a 59 aa 60 1f 40 d2 07 1d a7 6e 9f 0f 43 80 10 00 e5 75 d2 00 00 01 01 08 0a 05 7d a3 a0 ee d6 97 0a}
0xffffc0003e24b2b0 virtio-net-rx 0 1398795409.488048553 net_packet_eth if=2, data={52 54 00 12 34 56 ba ec 0e fc 7f a4 08 00 45 00 00 34 be f8 40 00 40 06 06 20 c0 a8 7a 01 c0 a8 7a 59 aa 60 1f 40 d2 07 1d a7 6e 9f 0f 44 80 11 00 e5 75 d2 00 00 01 01 08 0a 05 7d a3 a0 ee d6 97 0a}
You can convert the packet data to a human-readable form in the listing by passing the --tcpdump flag:
$ trace list --tcpdump
0xffffc0003e24b2b0 virtio-net-rx 0 1398795409.486467838 net_packet_eth IP 192.168.122.89.8000 > 192.168.122.1.43616: Flags [S.], seq 1855917805, ack 3523681621, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 4007040777 ecr 92119968], length 0
0xffffc0003e24b2b0 virtio-net-rx 0 1398795409.486709595 net_packet_eth IP 192.168.122.1.43616 > 192.168.122.89.8000: Flags [.], ack 1855917806, win 229, options [nop,nop,TS val 92119968 ecr 4007040777], length 0
0xffffc0003e24b2b0 virtio-net-rx 0 1398795409.487197876 net_packet_eth IP truncated-ip - 20 bytes missing! 192.168.122.1.43616 > 192.168.122.89.8000: Flags [P.], seq 3523681621:3523681703, ack 1855917806, win 229, options [nop,nop,TS val 92119968 ecr 4007040777], length 82
0xffffc0003e352010 0 1398795409.487758398 net_packet_eth IP truncated-ip - 23 bytes missing! 192.168.122.89.8000 > 192.168.122.1.43616: Flags [P.], seq 1855917806:1855917891, ack 3523681703, win 1040, options [nop,nop,TS val 4007040778 ecr 92119968], length 85
0xffffc0003e352010 0 1398795409.487840176 net_packet_eth IP 192.168.122.89.8000 > 192.168.122.1.43616: Flags [F.], seq 1855917891, ack 3523681703, win 1040, options [nop,nop,TS val 4007040778 ecr 92119968], length 0
0xffffc0003e24b2b0 virtio-net-rx 0 1398795409.487980366 net_packet_eth IP 192.168.122.1.43616 > 192.168.122.89.8000: Flags [.], ack 1855917891, win 229, options [nop,nop,TS val 92119968 ecr 4007040778], length 0
0xffffc0003e24b2b0 virtio-net-rx 0 1398795409.488048553 net_packet_eth IP 192.168.122.1.43616 > 192.168.122.89.8000: Flags [F.], seq 3523681703, ack 1855917892, win 229, options [nop,nop,TS val 92119968 ecr 4007040778], length 0
0xffffc0003e24b2b0 virtio-net-rx 0 1398795409.488061905 net_packet_eth IP 192.168.122.89.8000 > 192.168.122.1.43616: Flags [.], ack 3523681704, win 1040, options [nop,nop,TS val 4007040779 ecr 92119968], length 0
The last method may be very useful in some situations, for example when one needs to correlate network packet traffic with other tracepoints.
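For instance, to correlate packets with memory allocations, one might enable both tracepoint families in the same run and read the combined listing; a sketch (the second tracepoint is just an example):
# capture packets and memory allocations in a single trace
$ sudo scripts/run.py -nv --trace=net_packet* --trace=memory_malloc --trace-backtrace
# generate some traffic, then:
$ trace extract
$ trace list --tcpdump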
Another thing you can use the listing for is to determine which path in the OS triggered a particular packet to be sent (remember to pass --trace-backtrace to run.py). Example:
$ trace list --tcpdump -bL
...
0xffffc0003e352010 0 1398796772.218804121 net_packet_eth IP 192.168.122.89.8000 > 192.168.122.1.43812: Flags [F.], seq 3676057925, ack 1421681876, win 1040, options [nop,nop,TS val 3400817508 ecr 92460269], length 0 [if_transmit:124, ether_output_frame:391, ip_output(mbuf*, mbuf*, route*, int, ip_moptions*, inpcb*):613, tcp_output:1235, tcp_output_disconnect:304, sys_shutdown:719, shutdown:298, 0x10000009a346, 0x10000009a961