Demonstrations of biotop, the Linux eBPF/bcc version.


Short for block device I/O top, biotop summarizes which processes are
performing disk I/O. It's top for disks. Sample output:

# ./biotop
Tracing... Output every 1 secs. Hit Ctrl-C to end

08:04:11 loadavg: 1.48 0.87 0.45 1/287 14547

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
14501  cksum            R 202 1   xvda1      361   28832   3.39
6961   dd               R 202 1   xvda1     1628   13024   0.59
13855  dd               R 202 1   xvda1     1627   13016   0.59
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   3.00
1880   supervise        W 202 1   xvda1        2       8   6.71
1873   supervise        W 202 1   xvda1        2       8   2.51
1871   supervise        W 202 1   xvda1        2       8   1.57
1876   supervise        W 202 1   xvda1        2       8   1.22
1892   supervise        W 202 1   xvda1        2       8   0.62
1878   supervise        W 202 1   xvda1        2       8   0.78
1886   supervise        W 202 1   xvda1        2       8   1.30
1894   supervise        W 202 1   xvda1        2       8   3.46
1869   supervise        W 202 1   xvda1        2       8   0.73
1888   supervise        W 202 1   xvda1        2       8   1.48

By default the screen refreshes every 1 second, and shows the top 20 disk
consumers, sorted on total Kbytes. The first line printed is the header,
which has the time and then the contents of /proc/loadavg.
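
As a rough sketch (not necessarily the tool's exact code), that header line
can be reproduced in Python:

from time import strftime

# Emit a biotop-style header: HH:MM:SS, then the /proc/loadavg line
# (load averages, runnable/total tasks, and the most recent PID).
with open("/proc/loadavg") as f:
    print("%-8s loadavg: %s" % (strftime("%H:%M:%S"), f.read().strip()))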

For the interval summarized by the output above, the "cksum" command performed
361 disk reads to the "xvda1" device, for a total of 28832 Kbytes, with an
average I/O time of 3.39 ms. Two "dd" processes were also reading from the
same disk, with a higher I/O rate and lower latency. While the average I/O
size is not printed, it can be determined by dividing the Kbytes column by
the I/O column.
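
For the "cksum" row above, that division gives roughly an 80 Kbyte average
I/O size. A quick check in plain Python, using the numbers shown:

kbytes, ios = 28832, 361   # Kbytes and I/O columns for cksum above
print("avg I/O size: %.1f Kbytes" % (kbytes / float(ios)))   # ~79.9 Kbytes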

The columns through to Kbytes show the workload applied. The final column,
AVGms, shows resulting performance. Other bcc tools can be used to get more
details when needed: biolatency and biosnoop.

Many years ago I created the original "iotop", and later regretted not calling
it diskiotop or blockiotop, as "io" alone is ambiguous. This time it is biotop.


The -C option can be used to prevent the screen from clearing (my preference).
Here it is with a 5 second interval:

# ./biotop -C 5
Tracing... Output every 5 secs. Hit Ctrl-C to end

08:09:44 loadavg: 0.42 0.44 0.39 2/282 22115

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1     5993   47976   0.33
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.67
1866   svscan           R 202 1   xvda1       33     132   1.24
1880   supervise        W 202 1   xvda1       10      40   0.56
1873   supervise        W 202 1   xvda1       10      40   0.79
1871   supervise        W 202 1   xvda1       10      40   0.78
1876   supervise        W 202 1   xvda1       10      40   0.68
1892   supervise        W 202 1   xvda1       10      40   0.71
1878   supervise        W 202 1   xvda1       10      40   0.65
1886   supervise        W 202 1   xvda1       10      40   0.78
1894   supervise        W 202 1   xvda1       10      40   0.80
1869   supervise        W 202 1   xvda1       10      40   0.91
1888   supervise        W 202 1   xvda1       10      40   0.63
22069  bash             R 202 1   xvda1        1      16  19.94
9251   kworker/u16:2    W 202 16  xvdb         2       8   0.13

08:09:49 loadavg: 0.47 0.44 0.39 1/282 22231

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1    13450  107600   0.35
22199  cksum            R 202 1   xvda1      941   45548   4.63
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.93
24467  kworker/0:2      W 202 16  xvdb         1      64   0.28
1880   supervise        W 202 1   xvda1       10      40   0.81
1873   supervise        W 202 1   xvda1       10      40   0.81
1871   supervise        W 202 1   xvda1       10      40   1.03
1876   supervise        W 202 1   xvda1       10      40   0.76
1892   supervise        W 202 1   xvda1       10      40   0.74
1878   supervise        W 202 1   xvda1       10      40   0.94
1886   supervise        W 202 1   xvda1       10      40   0.76
1894   supervise        W 202 1   xvda1       10      40   0.69
1869   supervise        W 202 1   xvda1       10      40   0.72
1888   supervise        W 202 1   xvda1       10      40   1.70
22199  bash             R 202 1   xvda1        2      20   0.35
482    xfsaild/md0      W 202 16  xvdb         5      13   0.27
482    xfsaild/md0      W 202 32  xvdc         2       8   0.33
31331  pickup           R 202 1   xvda1        1       4   0.31

08:09:54 loadavg: 0.51 0.45 0.39 2/282 22346

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1    14689  117512   0.32
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.33
1880   supervise        W 202 1   xvda1       10      40   0.65
1873   supervise        W 202 1   xvda1       10      40   1.08
1871   supervise        W 202 1   xvda1       10      40   0.66
1876   supervise        W 202 1   xvda1       10      40   0.79
1892   supervise        W 202 1   xvda1       10      40   0.67
1878   supervise        W 202 1   xvda1       10      40   0.66
1886   supervise        W 202 1   xvda1       10      40   1.02
1894   supervise        W 202 1   xvda1       10      40   0.88
1869   supervise        W 202 1   xvda1       10      40   0.89
1888   supervise        W 202 1   xvda1       10      40   1.25

08:09:59 loadavg: 0.55 0.46 0.40 2/282 22461

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1    14442  115536   0.33
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   3.46
1880   supervise        W 202 1   xvda1       10      40   0.87
1873   supervise        W 202 1   xvda1       10      40   0.87
1871   supervise        W 202 1   xvda1       10      40   0.78
1876   supervise        W 202 1   xvda1       10      40   0.86
1892   supervise        W 202 1   xvda1       10      40   0.89
1878   supervise        W 202 1   xvda1       10      40   0.87
1886   supervise        W 202 1   xvda1       10      40   0.86
1894   supervise        W 202 1   xvda1       10      40   1.06
1869   supervise        W 202 1   xvda1       10      40   1.12
1888   supervise        W 202 1   xvda1       10      40   0.98

08:10:04 loadavg: 0.59 0.47 0.40 3/282 22576

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1    14179  113432   0.34
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.39
1880   supervise        W 202 1   xvda1       10      40   0.81
1873   supervise        W 202 1   xvda1       10      40   1.02
1871   supervise        W 202 1   xvda1       10      40   1.15
1876   supervise        W 202 1   xvda1       10      40   1.10
1892   supervise        W 202 1   xvda1       10      40   0.77
1878   supervise        W 202 1   xvda1       10      40   0.72
1886   supervise        W 202 1   xvda1       10      40   0.81
1894   supervise        W 202 1   xvda1       10      40   0.86
1869   supervise        W 202 1   xvda1       10      40   0.83
1888   supervise        W 202 1   xvda1       10      40   0.79
24467  kworker/0:2      R 202 32  xvdc         3      12   0.26
1056   cron             R 202 1   xvda1        2       8   0.30
24467  kworker/0:2      R 202 16  xvdb         1       4   0.23

08:10:09 loadavg: 0.54 0.46 0.40 2/281 22668

PID    COMM             D MAJ MIN DISK       I/O  Kbytes  AVGms
22069  dd               R 202 1   xvda1      250    2000   0.34
326    jbd2/xvda1-8     W 202 1   xvda1        3     168   2.40
1880   supervise        W 202 1   xvda1        8      32   0.93
1873   supervise        W 202 1   xvda1        8      32   0.76
1871   supervise        W 202 1   xvda1        8      32   0.60
1876   supervise        W 202 1   xvda1        8      32   0.61
1892   supervise        W 202 1   xvda1        8      32   0.68
1878   supervise        W 202 1   xvda1        8      32   0.90
1886   supervise        W 202 1   xvda1        8      32   0.57
1894   supervise        W 202 1   xvda1        8      32   0.97
1869   supervise        W 202 1   xvda1        8      32   0.69
1888   supervise        W 202 1   xvda1        8      32   0.67

This shows another "dd" command reading from xvda1. On this system, various
"supervise" processes do a couple of disk writes per second, every second
(they are creating and updating "status" files).
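
That rate can be read straight from the 5 second summaries above, where each
"supervise" process shows about 10 writes per interval. A trivial check in
Python, using those counts:

ios = 10        # supervise writes in one 5 second summary above
interval = 5    # seconds per summary (./biotop -C 5)
print("%.1f writes/sec" % (ios / float(interval)))   # 2.0 writes/sec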


USAGE message:

# ./biotop.py -h
usage: biotop.py [-h] [-C] [-r MAXROWS] [interval] [count]

Block device (disk) I/O by process

positional arguments:
  interval              output interval, in seconds
  count                 number of outputs

optional arguments:
  -h, --help            show this help message and exit
  -C, --noclear         don't clear the screen
  -r MAXROWS, --maxrows MAXROWS
                        maximum rows to print, default 20

examples:
    ./biotop            # block device I/O top, 1 second refresh
    ./biotop -C         # don't clear the screen
    ./biotop 5          # 5 second summaries
    ./biotop 5 10       # 5 second summaries, 10 times only

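The options can also be combined. For example, using only the flags documented
above, the following would print at most 10 rows, without clearing the screen,
every 5 seconds, 12 times:

    ./biotop -C -r 10 5 12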