The bm_diff Family
====

This family of Python scripts can be incredibly useful for fast iteration over
different performance tweaks. The tools allow you to save performance data from
a baseline commit, then quickly compare data from your working branch to that
baseline data to see if you have made any performance wins.

The tools operate in three concrete steps, which can be invoked separately or
all together via the driver script, bm_main.py. This README first describes the
typical workflow for these scripts, then covers the details of each script for
advanced usage.

## Normal Workflow

Let's say you are working on a performance optimization for grpc_error. You
have made some significant changes and want to see some data. From your branch,
run (ensure everything is committed first):

`tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -d master`

This will build the `bm_error` binary on your branch, then check out master and
build it there too. It will then run these benchmarks 5 times each. Lastly, it
will compute the statistically significant performance differences between the
two branches. This should show the nice performance wins your changes have
made.

If you have already invoked bm_main.py with `-d master`, you should instead use
`-o` for subsequent runs. This allows the script to skip rebuilding and
rerunning the unchanged master branch. For example:

`tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -o`

This will only build and run `bm_error` on your branch. It will then compare
the output to the saved runs from master.

## Advanced Workflow

If you have a deeper knowledge of these scripts, you can use them to do more
fine-tuned benchmark comparisons. For example, you could build, run, and save
the benchmark output from two different base branches, then diff both of these
baselines against your working branch to see how the different metrics change
(an illustrative sequence follows below). The rest of this doc goes over the
details of what each of the individual modules accomplishes.
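
For instance, once two hypothetical baselines named `base1` and `base2` and a
`current` build of your branch have each been built and run (see the individual
commands in the sections below), you could diff your branch against both
baselines:

`tools/profiling/microbenchmarks/bm_diff/bm_diff.py -b bm_error -o base1 -n current -l 5`

`tools/profiling/microbenchmarks/bm_diff/bm_diff.py -b bm_error -o base2 -n current -l 5`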

## bm_build.py

This script builds the benchmarks. It takes in a name parameter and stores the
binaries based on that name. Both the `opt` and `counters` configurations are
built. The `opt` build is used to get cpu_time and real_time, and the
`counters` build is used to track other metrics like allocs, atomic adds, etc.

For example, if you were to invoke (we assume everything is run from the
root of the repo):

`tools/profiling/microbenchmarks/bm_diff/bm_build.py -b bm_error -n baseline`

then the microbenchmark binaries will show up under
`bm_diff_baseline/{opt,counters}/bm_error`
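
As a quick sanity check, here is a minimal sketch (not part of these tools, and
assuming only the directory layout described above) that verifies both
configurations were built:

```python
import os
import sys

def check_built(name, benchmark):
    """Verify that bm_build.py produced both configurations for `name`."""
    for config in ('opt', 'counters'):
        # Layout described above: bm_diff_<name>/{opt,counters}/<benchmark>
        path = os.path.join('bm_diff_%s' % name, config, benchmark)
        if not os.path.isfile(path):
            sys.exit('missing binary: %s' % path)
    print('both configurations built for %s' % benchmark)

check_built('baseline', 'bm_error')
```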

## bm_run.py

This script runs the benchmarks. It takes a name parameter that must match the
name that was passed to `bm_build.py`. The script then runs each benchmark
multiple times (the default is 20; this can be changed via the loops
parameter). The output is saved as
`<benchmark name>.<config>.<name>.<loop idx>.json`.

For example, if you were to run:

`tools/profiling/microbenchmarks/bm_diff/bm_run.py -b bm_error -n baseline -l 5`

Then an example output file would be `bm_error.opt.baseline.0.json`.
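
Downstream tooling can pick these files up by name. As a rough illustration
(assuming only the naming scheme above, not these scripts' actual code), one
could collect the per-loop results like this:

```python
import glob
import json

def load_runs(benchmark, config, name):
    """Load every per-loop JSON file produced by bm_run.py for one config."""
    runs = []
    # Matches <benchmark name>.<config>.<name>.<loop idx>.json
    for path in sorted(glob.glob('%s.%s.%s.*.json' % (benchmark, config, name))):
        with open(path) as f:
            runs.append(json.load(f))
    return runs

# e.g. the five loops from the example above:
print('loaded %d runs' % len(load_runs('bm_error', 'opt', 'baseline')))
```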

## bm_diff.py

This script takes in the output from two benchmark runs, computes the diff
between them, and prints any significant improvements or regressions. It takes
in two name parameters, old and new. These must have previously been built and
run.

For example, assuming you had already built and run a 'baseline' microbenchmark
from master, and then you also built and ran a 'current' microbenchmark from
the branch you were working on, you could invoke:

`tools/profiling/microbenchmarks/bm_diff/bm_diff.py -b bm_error -o baseline -n current -l 5`

This would output the percent difference between your branch and master.
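
As a rough sketch of what "statistically significant" means here (this is
illustrative, not the script's actual code), a change is only reported when a
statistical test says the two sample sets differ, and the magnitude is then
expressed as a percent difference:

```python
import statistics
from scipy import stats

def percent_change_if_significant(old_samples, new_samples, p_threshold=0.05):
    """Return the percent change in the median, or None if it looks like noise."""
    # Nonparametric test: are the two sample sets drawn from the same distribution?
    _, p = stats.mannwhitneyu(old_samples, new_samples, alternative='two-sided')
    if p >= p_threshold:
        return None  # not statistically significant
    old_med = statistics.median(old_samples)
    new_med = statistics.median(new_samples)
    return 100.0 * (new_med - old_med) / old_med

# e.g. cpu_time samples from 5 loops of 'old' vs 'new':
print(percent_change_if_significant([10.1, 10.3, 9.9, 10.2, 10.0],
                                    [9.0, 9.2, 8.9, 9.1, 9.0]))
```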

## bm_main.py

This is the driver script. It uses the previous three modules and does
everything for you. You pass in the benchmarks to be run, the number of loops,
the number of CPUs to use, and the commit to compare to. The script will then:
* Build the benchmarks at head, then check out the branch to compare to and
  build the benchmarks there
* Run both sets of microbenchmarks
* Run bm_diff.py to compare the two and output the difference (a simplified
  sketch of this flow follows the example below)

For example, one might run:

`tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -d master`

This would compare the current branch's error benchmarks to master.
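
Conceptually, the driver chains the three modules like this (a simplified
sketch under assumed build names `new` and `old`, not the script's actual
code):

```python
import subprocess

TOOLS = 'tools/profiling/microbenchmarks/bm_diff'

def diff_against(benchmark='bm_error', loops='5', baseline='master'):
    # Build and run the benchmarks on the current branch.
    subprocess.check_call([TOOLS + '/bm_build.py', '-b', benchmark, '-n', 'new'])
    subprocess.check_call([TOOLS + '/bm_run.py', '-b', benchmark, '-n', 'new', '-l', loops])
    # Check out the baseline commit, then build and run the benchmarks there.
    subprocess.check_call(['git', 'checkout', baseline])
    try:
        subprocess.check_call([TOOLS + '/bm_build.py', '-b', benchmark, '-n', 'old'])
        subprocess.check_call([TOOLS + '/bm_run.py', '-b', benchmark, '-n', 'old', '-l', loops])
    finally:
        subprocess.check_call(['git', 'checkout', '-'])  # return to your branch
    # Diff the two result sets.
    subprocess.check_call([TOOLS + '/bm_diff.py', '-b', benchmark,
                           '-o', 'old', '-n', 'new', '-l', loops])

if __name__ == '__main__':
    diff_against()
```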

This script is invoked by our infrastructure on every PR to protect against
regressions and demonstrate performance wins.

However, if you are iterating over different performance tweaks quickly, it is
unnecessary to build and run the baseline commit every time. For that case we
provide a flag to use when you are sure the baseline benchmark has already been
built and run: pass the name of the baseline via the `--old` flag. This will
only build and run the current branch. For example:

`tools/profiling/microbenchmarks/bm_diff/bm_main.py -b bm_error -l 5 -o old`