| Name | Date | Size |
| --- | --- | --- |
| .clang-format | 05-Oct-2017 | 52 |
| .gitignore | 05-Oct-2017 | 469 |
| .travis-libcxx-setup.sh | 05-Oct-2017 | 923 |
| .travis.yml | 05-Oct-2017 | 3.4K |
| .ycm_extra_conf.py | 05-Oct-2017 | 3.6K |
| Android.bp | 05-Oct-2017 | 1.3K |
| appveyor.yml | 05-Oct-2017 | 1.5K |
| AUTHORS | 05-Oct-2017 | 1.2K |
| cmake/ | 05-Oct-2017 | |
| CMakeLists.txt | 05-Oct-2017 | 6.8K |
| CONTRIBUTING.md | 05-Oct-2017 | 2.4K |
| CONTRIBUTORS | 05-Oct-2017 | 2.1K |
| include/ | 05-Oct-2017 | |
| LICENSE | 05-Oct-2017 | 11.1K |
| mingw.py | 05-Oct-2017 | 10.1K |
| MODULE_LICENSE_APACHE2 | 05-Oct-2017 | 0 |
| NOTICE | 05-Oct-2017 | 11.1K |
| README.md | 05-Oct-2017 | 19.9K |
| README.version | 05-Oct-2017 | 140 |
| src/ | 05-Oct-2017 | |
| test/ | 05-Oct-2017 | |
| tools/ | 05-Oct-2017 | |

README.md

      1 # benchmark
      2 [![Build Status](https://travis-ci.org/google/benchmark.svg?branch=master)](https://travis-ci.org/google/benchmark)
      3 [![Build status](https://ci.appveyor.com/api/projects/status/u0qsyp7t1tk7cpxs/branch/master?svg=true)](https://ci.appveyor.com/project/google/benchmark/branch/master)
      4 [![Coverage Status](https://coveralls.io/repos/google/benchmark/badge.svg)](https://coveralls.io/r/google/benchmark)
      5 
      6 A library to support the benchmarking of functions, similar to unit-tests.
      7 
      8 Discussion group: https://groups.google.com/d/forum/benchmark-discuss
      9 
     10 IRC channel: https://freenode.net #googlebenchmark
     11 
     12 [Known issues and common problems](#known-issues)
     13 
     14 ## Example usage
     15 ### Basic usage
     16 Define a function that executes the code to be measured.
     17 
     18 ```c++
     19 static void BM_StringCreation(benchmark::State& state) {
     20   while (state.KeepRunning())
     21     std::string empty_string;
     22 }
     23 // Register the function as a benchmark
     24 BENCHMARK(BM_StringCreation);
     25 
     26 // Define another benchmark
     27 static void BM_StringCopy(benchmark::State& state) {
     28   std::string x = "hello";
     29   while (state.KeepRunning())
     30     std::string copy(x);
     31 }
     32 BENCHMARK(BM_StringCopy);
     33 
     34 BENCHMARK_MAIN();
     35 ```
     36 
     37 ### Passing arguments
     38 Sometimes a family of benchmarks can be implemented with just one routine that
     39 takes an extra argument to specify which one of the family of benchmarks to
     40 run. For example, the following code defines a family of benchmarks for
     41 measuring the speed of `memcpy()` calls of different lengths:
     42 
     43 ```c++
     44 static void BM_memcpy(benchmark::State& state) {
     45   char* src = new char[state.range(0)];
     46   char* dst = new char[state.range(0)];
     47   memset(src, 'x', state.range(0));
     48   while (state.KeepRunning())
     49     memcpy(dst, src, state.range(0));
     50   state.SetBytesProcessed(int64_t(state.iterations()) *
     51                           int64_t(state.range(0)));
     52   delete[] src;
     53   delete[] dst;
     54 }
     55 BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
     56 ```
     57 
The preceding code is quite repetitive, and can be replaced with the following
short-hand. This invocation picks a few appropriate arguments in the specified
range and generates a benchmark for each such argument.
     61 
     62 ```c++
     63 BENCHMARK(BM_memcpy)->Range(8, 8<<10);
     64 ```
     65 
     66 By default the arguments in the range are generated in multiples of eight and
     67 the command above selects [ 8, 64, 512, 4k, 8k ]. In the following code the
     68 range multiplier is changed to multiples of two.
     69 
     70 ```c++
     71 BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 8<<10);
     72 ```
Now the generated arguments are [ 8, 16, 32, 64, 128, 256, 512, 1k, 2k, 4k, 8k ].
     74 
     75 You might have a benchmark that depends on two or more inputs. For example, the
     76 following code defines a family of benchmarks for measuring the speed of set
     77 insertion.
     78 
     79 ```c++
     80 static void BM_SetInsert(benchmark::State& state) {
     81   while (state.KeepRunning()) {
     82     state.PauseTiming();
     83     std::set<int> data = ConstructRandomSet(state.range(0));
     84     state.ResumeTiming();
     85     for (int j = 0; j < state.range(1); ++j)
     86       data.insert(RandomNumber());
     87   }
     88 }
     89 BENCHMARK(BM_SetInsert)
     90     ->Args({1<<10, 1})
     91     ->Args({1<<10, 8})
     92     ->Args({1<<10, 64})
     93     ->Args({1<<10, 512})
     94     ->Args({8<<10, 1})
     95     ->Args({8<<10, 8})
     96     ->Args({8<<10, 64})
     97     ->Args({8<<10, 512});
     98 ```
     99 
The preceding code is quite repetitive, and can be replaced with the following
short-hand. This invocation picks a few appropriate arguments in the product of
the two specified ranges and generates a benchmark for each such pair.
    104 
    105 ```c++
    106 BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {1, 512}});
    107 ```
    108 
    109 For more complex patterns of inputs, passing a custom function to `Apply` allows
    110 programmatic specification of an arbitrary set of arguments on which to run the
    111 benchmark. The following example enumerates a dense range on one parameter,
    112 and a sparse range on the second.
    113 
    114 ```c++
    115 static void CustomArguments(benchmark::internal::Benchmark* b) {
    116   for (int i = 0; i <= 10; ++i)
    117     for (int j = 32; j <= 1024*1024; j *= 8)
    118       b->Args({i, j});
    119 }
    120 BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
    121 ```
    122 
    123 ### Calculate asymptotic complexity (Big O)
Asymptotic complexity may be calculated for a family of benchmarks. The
following code calculates the coefficient for the high-order term in the
running time and the normalized root-mean-square error for string comparison.
    127 
    128 ```c++
    129 static void BM_StringCompare(benchmark::State& state) {
    130   std::string s1(state.range(0), '-');
    131   std::string s2(state.range(0), '-');
    132   while (state.KeepRunning()) {
    133     benchmark::DoNotOptimize(s1.compare(s2));
    134   }
    135   state.SetComplexityN(state.range(0));
    136 }
    137 BENCHMARK(BM_StringCompare)
    138     ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity(benchmark::oN);
    139 ```
    140 
As shown in the following invocation, asymptotic complexity may also be
computed automatically.
    143 
    144 ```c++
    145 BENCHMARK(BM_StringCompare)
    146     ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity();
    147 ```
    148 
The following code specifies the asymptotic complexity with a lambda function,
which can be used to customize how the high-order term is calculated.
    151 
    152 ```c++
    153 BENCHMARK(BM_StringCompare)->RangeMultiplier(2)
    154     ->Range(1<<10, 1<<18)->Complexity([](int n)->double{return n; });
    155 ```
    156 
    157 ### Templated benchmarks
Templated benchmarks work the same way: this example produces and consumes
messages of size `sizeof(v)` `state.range(0)` times. It also outputs throughput
in the absence of multiprogramming.
    161 
    162 ```c++
template <class Q> void BM_Sequential(benchmark::State& state) {
    164   Q q;
    165   typename Q::value_type v;
    166   while (state.KeepRunning()) {
    167     for (int i = state.range(0); i--; )
    168       q.push(v);
    169     for (int e = state.range(0); e--; )
    170       q.Wait(&v);
    171   }
    172   // actually messages, not bytes:
    173   state.SetBytesProcessed(
    174       static_cast<int64_t>(state.iterations())*state.range(0));
    175 }
    176 BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
    177 ```
    178 
    179 Three macros are provided for adding benchmark templates.
    180 
    181 ```c++
    182 #if __cplusplus >= 201103L // C++11 and greater.
    183 #define BENCHMARK_TEMPLATE(func, ...) // Takes any number of parameters.
    184 #else // C++ < C++11
    185 #define BENCHMARK_TEMPLATE(func, arg1)
    186 #endif
    187 #define BENCHMARK_TEMPLATE1(func, arg1)
    188 #define BENCHMARK_TEMPLATE2(func, arg1, arg2)
    189 ```
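
As a rough sketch of the difference, the fixed-arity forms might be used as
shown below. `BM_Sequential` and `WaitQueue` come from the example above;
`BM_ContainerPush` and its template parameters are hypothetical names
introduced here only for illustration:

```c++
// Pre-C++11 form taking exactly one template argument; equivalent to the
// BENCHMARK_TEMPLATE call shown earlier.
BENCHMARK_TEMPLATE1(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);

// A hypothetical benchmark with two template parameters, registered with the
// two-argument form (this instantiation requires <vector>).
template <class Container, class Value>
static void BM_ContainerPush(benchmark::State& state) {
  while (state.KeepRunning()) {
    Container c;
    c.push_back(Value());
  }
}
BENCHMARK_TEMPLATE2(BM_ContainerPush, std::vector<int>, int);
```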
    190 
    191 ## Passing arbitrary arguments to a benchmark
    192 In C++11 it is possible to define a benchmark that takes an arbitrary number
    193 of extra arguments. The `BENCHMARK_CAPTURE(func, test_case_name, ...args)`
    194 macro creates a benchmark that invokes `func`  with the `benchmark::State` as
    195 the first argument followed by the specified `args...`.
    196 The `test_case_name` is appended to the name of the benchmark and
    197 should describe the values passed.
    198 
    199 ```c++
template <class ...ExtraArgs>
void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
  [...]
}
// Registers a benchmark named "BM_takes_args/int_string_test" that passes
// the specified values to `extra_args`.
BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
    207 ```
    208 Note that elements of `...args` may refer to global variables. Users should
    209 avoid modifying global state inside of a benchmark.
    210 
    211 ## Using RegisterBenchmark(name, fn, args...)
    212 
    213 The `RegisterBenchmark(name, func, args...)` function provides an alternative
    214 way to create and register benchmarks.
    215 `RegisterBenchmark(name, func, args...)` creates, registers, and returns a
    216 pointer to a new benchmark with the specified `name` that invokes
    217 `func(st, args...)` where `st` is a `benchmark::State` object.
    218 
Unlike the `BENCHMARK` registration macros, which can only be used at global
scope, `RegisterBenchmark` can be called anywhere. This allows benchmarks to be
registered programmatically.
    222 
Additionally `RegisterBenchmark` allows any callable object, including
capturing lambdas and function objects, to be registered as a benchmark. This
makes it possible to create benchmarks from information that is only available
at run time.

For example:
    228 ```c++
    229 auto BM_test = [](benchmark::State& st, auto Inputs) { /* ... */ };
    230 
    231 int main(int argc, char** argv) {
    232   for (auto& test_input : { /* ... */ })
    233       benchmark::RegisterBenchmark(test_input.name(), BM_test, test_input);
    234   benchmark::Initialize(&argc, argv);
    235   benchmark::RunSpecifiedBenchmarks();
    236 }
    237 ```
    238 
    239 ### Multithreaded benchmarks
In a multithreaded test (a benchmark invoked by multiple threads
simultaneously), it is guaranteed that none of the threads will start until all
have called `KeepRunning`, and all will have finished before `KeepRunning`
returns false. As such, any global setup or teardown can be wrapped in a check
against the thread index:
    245 
    246 ```c++
    247 static void BM_MultiThreaded(benchmark::State& state) {
    248   if (state.thread_index == 0) {
    249     // Setup code here.
    250   }
    251   while (state.KeepRunning()) {
    252     // Run the test as normal.
    253   }
    254   if (state.thread_index == 0) {
    255     // Teardown code here.
    256   }
    257 }
    258 BENCHMARK(BM_MultiThreaded)->Threads(2);
    259 ```
    260 
    261 If the benchmarked code itself uses threads and you want to compare it to
    262 single-threaded code, you may want to use real-time ("wallclock") measurements
    263 for latency comparisons:
    264 
    265 ```c++
    266 BENCHMARK(BM_test)->Range(8, 8<<10)->UseRealTime();
    267 ```
    268 
    269 Without `UseRealTime`, CPU time is used by default.
    270 
    271 
    272 ## Manual timing
For benchmarking something for which neither CPU time nor real time is correct
or accurate enough, completely manual timing is supported using the
`UseManualTime` function.
    276 
    277 When `UseManualTime` is used, the benchmarked code must call
    278 `SetIterationTime` once per iteration of the `KeepRunning` loop to
    279 report the manually measured time.
    280 
    281 An example use case for this is benchmarking GPU execution (e.g. OpenCL
    282 or CUDA kernels, OpenGL or Vulkan or Direct3D draw calls), which cannot
    283 be accurately measured using CPU time or real-time. Instead, they can be
    284 measured accurately using a dedicated API, and these measurement results
    285 can be reported back with `SetIterationTime`.
    286 
    287 ```c++
    288 static void BM_ManualTiming(benchmark::State& state) {
    289   int microseconds = state.range(0);
    290   std::chrono::duration<double, std::micro> sleep_duration {
    291     static_cast<double>(microseconds)
    292   };
    293 
    294   while (state.KeepRunning()) {
    295     auto start = std::chrono::high_resolution_clock::now();
    296     // Simulate some useful workload with a sleep
    297     std::this_thread::sleep_for(sleep_duration);
    298     auto end   = std::chrono::high_resolution_clock::now();
    299 
    300     auto elapsed_seconds =
    301       std::chrono::duration_cast<std::chrono::duration<double>>(
    302         end - start);
    303 
    304     state.SetIterationTime(elapsed_seconds.count());
    305   }
    306 }
    307 BENCHMARK(BM_ManualTiming)->Range(1, 1<<17)->UseManualTime();
    308 ```
    309 
    310 ### Preventing optimisation
To prevent a value or expression from being optimized away by the compiler,
the `benchmark::DoNotOptimize(...)` and `benchmark::ClobberMemory()` functions
can be used.
    314 
    315 ```c++
    316 static void BM_test(benchmark::State& state) {
    317   while (state.KeepRunning()) {
    318       int x = 0;
    319       for (int i=0; i < 64; ++i) {
    320         benchmark::DoNotOptimize(x += i);
    321       }
    322   }
    323 }
    324 ```
    325 
`DoNotOptimize(<expr>)` forces the *result* of `<expr>` to be stored in either
memory or a register. For GNU-based compilers it acts as a read/write barrier
for global memory. More specifically, it forces the compiler to flush pending
writes to memory and reload any other values as necessary.
    330 
    331 Note that `DoNotOptimize(<expr>)` does not prevent optimizations on `<expr>`
    332 in any way. `<expr>` may even be removed entirely when the result is already
    333 known. For example:
    334 
    335 ```c++
    336   /* Example 1: `<expr>` is removed entirely. */
    337   int foo(int x) { return x + 42; }
    338   while (...) DoNotOptimize(foo(0)); // Optimized to DoNotOptimize(42);
    339 
    340   /*  Example 2: Result of '<expr>' is only reused */
    341   int bar(int) __attribute__((const));
    342   while (...) DoNotOptimize(bar(0)); // Optimized to:
    343   // int __result__ = bar(0);
    344   // while (...) DoNotOptimize(__result__);
    345 ```
    346 
The second tool for preventing optimizations is `ClobberMemory()`. In essence,
`ClobberMemory()` forces the compiler to perform all pending writes to global
memory. Memory managed by block-scope objects must be "escaped" using
`DoNotOptimize(...)` before it can be clobbered. In the example below,
`ClobberMemory()` prevents the call to `v.push_back(42)` from being optimized
away.
    353 
    354 ```c++
    355 static void BM_vector_push_back(benchmark::State& state) {
    356   while (state.KeepRunning()) {
    357     std::vector<int> v;
    358     v.reserve(1);
    359     benchmark::DoNotOptimize(v.data()); // Allow v.data() to be clobbered.
    360     v.push_back(42);
    361     benchmark::ClobberMemory(); // Force 42 to be written to memory.
    362   }
    363 }
    364 ```
    365 
    366 Note that `ClobberMemory()` is only available for GNU based compilers.
    367 
    368 ### Set time unit manually
If a benchmark runs for a few milliseconds, it may be hard to visually compare
the measured times, since the output data is given in nanoseconds by default.
To make the output easier to read, you can set the time unit explicitly:
    372 
    373 ```c++
    374 BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
    375 ```
    376 
    377 ## Controlling number of iterations
In all cases, the number of iterations for which the benchmark is run is
governed by the amount of time the benchmark takes. Concretely, the benchmark
runs for at least one and at most 1e9 iterations; the iteration count is
increased until either the CPU time exceeds the minimum time or the wall-clock
time exceeds 5x the minimum time. The minimum time is set globally with the
`--benchmark_min_time` flag, or per benchmark by calling `MinTime` on the
registered benchmark object.
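
For instance, a minimal sketch of the per-benchmark form, reusing the `BM_test`
name from the earlier examples and an arbitrary 2-second minimum:

```c++
// Run BM_test until at least 2 seconds of CPU time have accumulated
// (subject to the wall-clock and 1e9-iteration caps described above).
// The equivalent global setting is --benchmark_min_time=2.0.
BENCHMARK(BM_test)->MinTime(2.0);
```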
    384 
## Reporting the mean and standard deviation by repeated benchmarks
By default each benchmark is run once and that single result is reported.
However, benchmarks are often noisy and a single result may not be
representative of the overall behavior. For this reason it's possible to
repeatedly rerun the benchmark.
    390 
    391 The number of runs of each benchmark is specified globally by the
    392 `--benchmark_repetitions` flag or on a per benchmark basis by calling
    393 `Repetitions` on the registered benchmark object. When a benchmark is run
    394 more than once the mean and standard deviation of the runs will be reported.
    395 
Additionally the `--benchmark_report_aggregates_only={true|false}` flag or the
`ReportAggregatesOnly(bool)` function can be used to change how repeated runs
are reported. By default the result of each repeated run is reported. When this
option is `true`, only the mean and standard deviation of the runs are
reported. Calling `ReportAggregatesOnly(bool)` on a registered benchmark object
overrides the value of the flag for that benchmark.
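
A minimal sketch, again reusing the `BM_test` name and arbitrary values:

```c++
// Run BM_test 10 times and report only the mean and standard deviation of
// those runs; this overrides --benchmark_repetitions and
// --benchmark_report_aggregates_only for this benchmark only.
BENCHMARK(BM_test)->Repetitions(10)->ReportAggregatesOnly(true);
```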
    402 
    403 ## Fixtures
Fixture tests are created by first defining a type that derives from
`::benchmark::Fixture` and then creating/registering the tests using the
following macros:
    407 
    408 * `BENCHMARK_F(ClassName, Method)`
    409 * `BENCHMARK_DEFINE_F(ClassName, Method)`
    410 * `BENCHMARK_REGISTER_F(ClassName, Method)`
    411 
For example:
    413 
    414 ```c++
    415 class MyFixture : public benchmark::Fixture {};
    416 
BENCHMARK_F(MyFixture, FooTest)(benchmark::State& st) {
  while (st.KeepRunning()) {
    ...
  }
}

BENCHMARK_DEFINE_F(MyFixture, BarTest)(benchmark::State& st) {
  while (st.KeepRunning()) {
    ...
  }
}
    428 /* BarTest is NOT registered */
    429 BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
    430 /* BarTest is now registered */
    431 ```
    432 
    433 ## Exiting Benchmarks in Error
    434 
When errors caused by external influences, such as file I/O and network
communication, occur within a benchmark, the
`State::SkipWithError(const char* msg)` function can be used to skip that run
of the benchmark and report the error. Note that only future iterations of the
`KeepRunning()` loop are skipped. Users may explicitly `return` to exit the
benchmark immediately.
    441 
    442 The `SkipWithError(...)` function may be used at any point within the benchmark,
    443 including before and after the `KeepRunning()` loop.
    444 
    445 For example:
    446 
    447 ```c++
static void BM_test(benchmark::State& state) {
  auto resource = GetResource();
  if (!resource.good()) {
    state.SkipWithError("Resource is not good!");
    // KeepRunning() loop will not be entered.
  }
  while (state.KeepRunning()) {
    auto data = resource.read_data();
    if (!resource.good()) {
      state.SkipWithError("Failed to read data!");
      break; // Needed to skip the rest of the iteration.
    }
    do_stuff(data);
  }
}
    463 ```
    464 
    465 ## Running a subset of the benchmarks
    466 
    467 The `--benchmark_filter=<regex>` option can be used to only run the benchmarks
    468 which match the specified `<regex>`. For example:
    469 
    470 ```bash
    471 $ ./run_benchmarks.x --benchmark_filter=BM_memcpy/32
    472 Run on (1 X 2300 MHz CPU )
    473 2016-06-25 19:34:24
    474 Benchmark              Time           CPU Iterations
    475 ----------------------------------------------------
    476 BM_memcpy/32          11 ns         11 ns   79545455
    477 BM_memcpy/32k       2181 ns       2185 ns     324074
    478 BM_memcpy/32          12 ns         12 ns   54687500
    479 BM_memcpy/32k       1834 ns       1837 ns     357143
    480 ```
    481 
    482 
    483 ## Output Formats
    484 The library supports multiple output formats. Use the
    485 `--benchmark_format=<console|json|csv>` flag to set the format type. `console`
    486 is the default format.
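
For example, reusing the hypothetical binary name from the filtering example
below:

```bash
$ ./run_benchmarks.x --benchmark_format=json
```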
    487 
The console format is intended to be human readable and, by default, generates
color output. Context is output on stderr and the tabular data on stdout.
Example tabular output looks like:
    491 ```
    492 Benchmark                               Time(ns)    CPU(ns) Iterations
    493 ----------------------------------------------------------------------
    494 BM_SetInsert/1024/1                        28928      29349      23853  133.097kB/s   33.2742k items/s
    495 BM_SetInsert/1024/8                        32065      32913      21375  949.487kB/s   237.372k items/s
    496 BM_SetInsert/1024/10                       33157      33648      21431  1.13369MB/s   290.225k items/s
    497 ```
    498 
The JSON format outputs human-readable JSON split into two top-level
attributes. The `context` attribute contains information about the run in
general, including information about the CPU and the date.
The `benchmarks` attribute contains a list of every benchmark run. Example JSON
output looks like:
    504 ``` json
    505 {
    506   "context": {
    507     "date": "2015/03/17-18:40:25",
    508     "num_cpus": 40,
    509     "mhz_per_cpu": 2801,
    510     "cpu_scaling_enabled": false,
    511     "build_type": "debug"
    512   },
    513   "benchmarks": [
    514     {
    515       "name": "BM_SetInsert/1024/1",
    516       "iterations": 94877,
    517       "real_time": 29275,
    518       "cpu_time": 29836,
    519       "bytes_per_second": 134066,
    520       "items_per_second": 33516
    521     },
    522     {
    523       "name": "BM_SetInsert/1024/8",
    524       "iterations": 21609,
    525       "real_time": 32317,
    526       "cpu_time": 32429,
    527       "bytes_per_second": 986770,
    528       "items_per_second": 246693
    529     },
    530     {
    531       "name": "BM_SetInsert/1024/10",
    532       "iterations": 21393,
    533       "real_time": 32724,
    534       "cpu_time": 33355,
    535       "bytes_per_second": 1199226,
    536       "items_per_second": 299807
    537     }
    538   ]
    539 }
    540 ```
    541 
    542 The CSV format outputs comma-separated values. The `context` is output on stderr
    543 and the CSV itself on stdout. Example CSV output looks like:
    544 ```
    545 name,iterations,real_time,cpu_time,bytes_per_second,items_per_second,label
    546 "BM_SetInsert/1024/1",65465,17890.7,8407.45,475768,118942,
    547 "BM_SetInsert/1024/8",116606,18810.1,9766.64,3.27646e+06,819115,
    548 "BM_SetInsert/1024/10",106365,17238.4,8421.53,4.74973e+06,1.18743e+06,
    549 ```
    550 
    551 ## Output Files
    552 The library supports writing the output of the benchmark to a file specified
    553 by `--benchmark_out=<filename>`. The format of the output can be specified
    554 using `--benchmark_out_format={json|console|csv}`. Specifying
    555 `--benchmark_out` does not suppress the console output.
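
For example, the following sketch (again using the hypothetical
`run_benchmarks.x` binary) writes JSON results to `results.json` while still
printing the console output:

```bash
$ ./run_benchmarks.x --benchmark_out=results.json --benchmark_out_format=json
```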
    556 
    557 ## Debug vs Release
    558 By default, benchmark builds as a debug library. You will see a warning in the output when this is the case. To build it as a release library instead, use:
    559 
    560 ```
    561 cmake -DCMAKE_BUILD_TYPE=Release
    562 ```
    563 
    564 To enable link-time optimisation, use
    565 
    566 ```
    567 cmake -DCMAKE_BUILD_TYPE=Release -DBENCHMARK_ENABLE_LTO=true
    568 ```
    569 
    570 ## Linking against the library
    571 When using gcc, it is necessary to link against pthread to avoid runtime exceptions.
    572 This is due to how gcc implements std::thread.
    573 See [issue #67](https://github.com/google/benchmark/issues/67) for more details.
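
A minimal sketch of such a link line, assuming the library is installed as
`libbenchmark` and the benchmark code lives in a file named `mybenchmark.cc`
(both names are illustrative):

```bash
$ g++ -std=c++11 mybenchmark.cc -o mybenchmark -lbenchmark -lpthread
```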
    574 
    575 ## Compiler Support
    576 
    577 Google Benchmark uses C++11 when building the library. As such we require
    578 a modern C++ toolchain, both compiler and standard library.
    579 
The following minimum versions are strongly recommended to build the library:
    581 
    582 * GCC 4.8
    583 * Clang 3.4
    584 * Visual Studio 2013
    585 
    586 Anything older *may* work.
    587 
    588 Note: Using the library and its headers in C++03 is supported. C++11 is only
    589 required to build the library.
    590 
    591 # Known Issues
    592 
    593 ### Windows
    594 
* Users must manually link `shlwapi.lib`. Failure to do so may result
in unresolved symbols (see the sketch below).
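
One way to satisfy this when building with MSVC is a linker pragma in any
source file of the benchmark binary (adding `shlwapi` to the linker inputs of
your build system works just as well); a minimal sketch:

```c++
// Ask the MSVC linker to pull in shlwapi.lib for this binary.
#pragma comment(lib, "shlwapi.lib")
```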
    597 
    598 

README.version

      1 URL: https://github.com/google/benchmark
      2 Version: 8da907c2c2786685c7da9f4759de052e3990f6f1
      3 BugComponent: 119451
      4 Owners: enh, android-bionic
      5