# Benchmark Tools
## compare_bench.py

The `compare_bench.py` utility compares the results of two benchmark runs. The program is invoked like:

```bash
$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
```
Where `<old-benchmark>` and `<new-benchmark>` each specify either a benchmark executable or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, it is run to obtain the results; otherwise the results are simply loaded from the output file.
`[benchmark options]` are passed through to the benchmark invocations. They can be anything the binary accepts, whether standard `--benchmark_*` parameters or custom parameters your binary defines.
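For example, a typical invocation compares two builds of the same benchmark binary and forwards a filter to both of them (the build paths below are hypothetical):

```bash
# Run both builds of basic_test and diff the results; the filter is
# forwarded to each invocation, so only matching benchmarks are run.
$ compare_bench.py ./build-old/basic_test ./build-new/basic_test --benchmark_filter=BM_empty
```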
The output of `compare_bench.py` is a comparison table with these columns:

```
Benchmark                        Time             CPU      Time Old      Time New       CPU Old       CPU New
---------------------------------------------------------------------------------------------------------------
...
```
When a benchmark executable is run, the raw output from the benchmark is printed to stdout in real time. The sample output using `benchmark/basic_test` for both arguments looks like:

```
Benchmark                                 Time           CPU Iterations
------------------------------------------------------------------------
...

Benchmark                                 Time           CPU Iterations
------------------------------------------------------------------------
...

Benchmark                        Time             CPU      Time Old      Time New       CPU Old       CPU New
---------------------------------------------------------------------------------------------------------------
...
```

The first two tables are the raw output of the old and new runs; the final table is the comparison computed from them.
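Because JSON files are accepted in place of executables, a baseline can be recorded once and reused for later comparisons. A minimal sketch, assuming the standard `--benchmark_out` flags and hypothetical file names:

```bash
# Record a baseline from the current build into a JSON file.
$ ./build-old/basic_test --benchmark_out=baseline.json --benchmark_out_format=json

# ...later, after rebuilding with changes...

# Compare the saved baseline against the new executable; only the new
# binary is actually run, the baseline is loaded from the file.
$ compare_bench.py baseline.json ./build-new/basic_test
```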
## compare.py

The `compare.py` utility compares benchmark results and has three modes of operation.

1. Compare two benchmarks

The program is invoked like:

```bash
$ compare.py benchmarks <benchmark_baseline> <benchmark_contender> [benchmark options]...
```
Where `<benchmark_baseline>` and `<benchmark_contender>` each specify either a benchmark executable or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, it is run to obtain the results; otherwise the results are simply loaded from the output file.
`[benchmark options]` are passed through to the benchmark invocations. They can be anything the binary accepts, whether standard `--benchmark_*` parameters or custom parameters your binary defines.
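The two inputs may be of different types, so a saved JSON baseline can be diffed directly against a freshly built executable (file names here are hypothetical):

```bash
# The baseline results are loaded from JSON; only the contender binary is run.
$ compare.py benchmarks baseline.json ./build-new/my_benchmark
```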
The raw output of both runs is printed in real time, followed by the comparison:

```
Benchmark                                 Time           CPU Iterations
------------------------------------------------------------------------
...

Benchmark                                 Time           CPU Iterations
------------------------------------------------------------------------
...

Benchmark                        Time             CPU      Time Old      Time New       CPU Old       CPU New
---------------------------------------------------------------------------------------------------------------
...
```
For every benchmark in the first run, the tool looks for a benchmark with exactly the same name in the second run and compares the two results. If a name has no exact match in the other run, that benchmark is omitted from the diff.
2. Compare two different filters of one benchmark

The program is invoked like:

```bash
$ compare.py filters <benchmark> <filter_baseline> <filter_contender> [benchmark options]...
```
Where `<benchmark>` specifies either a benchmark executable or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, it is run to obtain the results; otherwise the results are simply loaded from the output file.
Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.
`[benchmark options]` are passed through to the benchmark invocations. They can be anything the binary accepts, whether standard `--benchmark_*` parameters or custom parameters your binary defines.
The raw output of the two filtered runs is printed, followed by the comparison:

```
Benchmark                                 Time           CPU Iterations
------------------------------------------------------------------------
...

Benchmark                                 Time           CPU Iterations
------------------------------------------------------------------------
...

Benchmark                        Time             CPU      Time Old      Time New       CPU Old       CPU New
---------------------------------------------------------------------------------------------------------------
...
```
As you can see, the filter is applied to the benchmarks both when running the binary and before computing the diff. To make the diff work, the matched portion of each benchmark name is replaced with a common placeholder string, which is what allows two different benchmark families within one benchmark binary to be compared.
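For instance, given a binary that defines two benchmark families (the names `BM_memcpy` and `BM_copy` below are hypothetical), one family can be diffed against the other:

```bash
# Run only BM_memcpy* as the baseline and only BM_copy* as the contender,
# then diff the two filtered result sets against each other.
$ compare.py filters ./my_benchmark BM_memcpy BM_copy
```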
3. Compare filter one from benchmark one to filter two from benchmark two

The program is invoked like:

```bash
$ compare.py benchmarksfiltered <benchmark_baseline> <filter_baseline> <benchmark_contender> <filter_contender> [benchmark options]...
```
Where `<benchmark_baseline>` and `<benchmark_contender>` each specify either a benchmark executable or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, it is run to obtain the results; otherwise the results are simply loaded from the output file.
Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.
`[benchmark options]` are passed through to the benchmark invocations. They can be anything the binary accepts, whether standard `--benchmark_*` parameters or custom parameters your binary defines.
The raw output of both filtered runs is printed, followed by the comparison:

```
Benchmark                                 Time           CPU Iterations
------------------------------------------------------------------------
...

Benchmark                                 Time           CPU Iterations
------------------------------------------------------------------------
...

Benchmark                        Time             CPU      Time Old      Time New       CPU Old       CPU New
---------------------------------------------------------------------------------------------------------------
...
```
This mode is a mix of the previous two: two (potentially different) benchmark binaries are run, and a different filter is applied to each one.
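A sketch of such an invocation, with hypothetical binary paths and benchmark family names:

```bash
# Diff the BM_memcpy family from the baseline binary against the BM_copy
# family from the contender binary.
$ compare.py benchmarksfiltered ./build-old/my_benchmark BM_memcpy ./build-new/my_benchmark BM_copy
```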