# Chrome Performance Dashboard Data Format

## Recommended Format: Dashboard JSON v1

The endpoint that accepts new points
(`https://chromeperf.appspot.com/add_point`) accepts HTTP POST
requests. The POST request should have a single parameter named
"data", whose value is JSON containing all of the data being
uploaded.

Example:

```javascript
{
  "master": "master.chromium.perf",
  "bot": "linux-release",
  "point_id": 123456,
  "versions": {
    "version type": "version string"
  },
  "supplemental": {
    "field name": "supplemental data string",
    "default_rev": "r_chrome_version"
  },
  "chart_data": {/*... as output by Telemetry; see below ...*/}
}
```

Fields:

 * `master` (string): Buildbot master name or top-level category for data.
 * `bot` (string): Buildbot builder name, or another string that
 represents platform type.
 * `test_suite_name` (string): A string to use in the perf dashboard test
 path after master/bot. Can contain slashes.
 * `point_id` (int): The ID used to index this point; it should increase
 monotonically for each series.
 * `format_version` (string): Allows the dashboard to know how to process
 the structure.
 * `versions` (dict): Maps version type name to version string.
 * `supplemental` (dict): Unstructured key-value pairs which may be
 displayed on the dashboard. Used to describe bot hardware, OS,
 Chrome feature status, etc.
 * `chart_data` (dict): The chart JSON as output by Telemetry.

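As a concrete illustration of the upload format above, the following sketch builds the "data" parameter for an `/add_point` POST. The endpoint URL and parameter name come from this document; the payload values (and the helper's name) are illustrative placeholders only.

```python
import json
from urllib.parse import urlencode

# Endpoint and parameter name are from the docs; everything else here
# is a placeholder, not real dashboard data.
ADD_POINT_URL = "https://chromeperf.appspot.com/add_point"

def build_add_point_body(payload):
    """Encode a Dashboard JSON v1 payload as the POST body expected by
    /add_point: a single form parameter named "data" whose value is
    the JSON-encoded payload."""
    return urlencode({"data": json.dumps(payload)})

payload = {
    "master": "master.chromium.perf",
    "bot": "linux-release",
    "point_id": 123456,
    "versions": {"chrome": "version string"},        # placeholder
    "supplemental": {"default_rev": "r_chrome_version"},
    "chart_data": {},  # normally the chart JSON as output by Telemetry
}
body = build_add_point_body(payload)
# The body can then be sent with any HTTP client, e.g.:
#   urllib.request.urlopen(ADD_POINT_URL, body.encode("utf-8"))
```

Only the body construction is shown; an actual upload additionally requires the sender's IP to be whitelisted, as described later in this document.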
### Chart data

This contains all of the test results and any metadata that is stored with
the test.

```json
{
  "format_version": "1.0",
  "benchmark_name": "page_cycler.typical_25",
  "charts": {
    "warm_times": {
      "http://www.google.com/": {
        "type": "list_of_scalar_values",
        "values": [9, 9, 8, 9]
      },
      "http://www.yahoo.com/": {
        "type": "list_of_scalar_values",
        "values": [4, 5, 4, 4]
      },
      "summary": {
        "type": "list_of_scalar_values",
        "values": [13, 14, 12, 13],
        "file": "gs://..."
      }
    }
  }
}
```

Fields:

 * `charts` (dict of string to dict): Maps each chart name to its
 chart dict.
 * `units` (string): Units to display on the dashboard.
 * `traces` (dict of string to dict): Maps each trace name to its
 trace dict.
 * `type` (string): `"scalar"`, `"list_of_scalar_values"` or `"histogram"`,
 which tells the dashboard how to interpret the rest of the fields.
 * `improvement_direction` (string): Either `"bigger_is_better"` or
 `"smaller_is_better"`.
 * `summary`: A special trace name which denotes the trace in a chart that
 does not correspond to a specific page.

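The chart/trace structure above can be walked generically. The sketch below is an illustrative consumer (the helper name and return shape are not part of any dashboard API): it collects each trace's value, averaging `list_of_scalar_values` traces.

```python
import statistics

def summarize_chart_data(chart_data):
    """Walk chart JSON shaped as described above and return
    {chart_name: {trace_name: value}}, using the mean for
    list_of_scalar_values traces. Illustrative helper only."""
    summary = {}
    for chart_name, traces in chart_data.get("charts", {}).items():
        for trace_name, trace in traces.items():
            if trace.get("type") == "list_of_scalar_values":
                summary.setdefault(chart_name, {})[trace_name] = (
                    statistics.mean(trace["values"]))
            elif trace.get("type") == "scalar":
                summary.setdefault(chart_name, {})[trace_name] = trace["value"]
    return summary

chart_data = {
    "format_version": "1.0",
    "benchmark_name": "page_cycler.typical_25",
    "charts": {
        "warm_times": {
            "http://www.google.com/": {
                "type": "list_of_scalar_values", "values": [9, 9, 8, 9]},
            "summary": {
                "type": "list_of_scalar_values", "values": [13, 14, 12, 13]},
        }
    },
}
means = summarize_chart_data(chart_data)
```
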
## Legacy Format

This format is deprecated and should not be used for new clients.

In the format described below, the value of "data" in the HTTP POST
should be a JSON encoding of a list of points to add. Each point is a
map of property names to values for that point.

Example 1:

```json
[
  {
    "master": "SenderType",
    "bot": "platform-type",
    "test": "my_test_suite/chart_name/trace_name",
    "revision": 1234,
    "value": 18.5
  }
]
```

Required fields:

 * `master` (string), `bot` (string), `test` (string): These three
 fields in combination specify a particular "test path". The master and
 bot are supposed to be the Buildbot master name and slave `perf_id`,
 respectively, but if the tests aren't being run by Buildbot, these
 can be any descriptive strings which specify the test data origin
 (note that master and bot names can't contain slashes, and none of
 these fields can contain asterisks).
 * `revision` (int): The point ID, used to index the data point. It
 doesn't actually have to be a "revision". It should be monotonically
 increasing for data in each series.
 * `value` (float): The Y-value for this point.

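The constraints above are easy to check before uploading. This validator is an illustrative sketch (not dashboard code): it enforces the required fields, the no-slashes rule for master/bot, and the no-asterisks rule.

```python
def validate_legacy_point(point):
    """Check a legacy-format point against the required-field rules
    described above. Returns a list of problems (empty if the point
    looks valid). Illustrative sketch only."""
    problems = []
    for field in ("master", "bot", "test"):
        value = point.get(field)
        if not isinstance(value, str) or not value:
            problems.append("missing or non-string field: %s" % field)
            continue
        if "*" in value:
            problems.append("%s must not contain asterisks" % field)
        # Only the test path may contain slashes.
        if field != "test" and "/" in value:
            problems.append("%s must not contain slashes" % field)
    if not isinstance(point.get("revision"), int):
        problems.append("revision must be an int")
    if not isinstance(point.get("value"), (int, float)):
        problems.append("value must be a number")
    return problems

good = {"master": "SenderType", "bot": "platform-type",
        "test": "my_test_suite/chart_name/trace_name",
        "revision": 1234, "value": 18.5}
bad = {"master": "a/b", "bot": "ok", "test": "t*", "revision": "x"}
```
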
Example 2 (including optional fields):

```json
[
  {
    "master": "ChromiumPerf",
    "bot": "linux-release",
    "test": "sunspider/string-unpack-code/ref",
    "revision": 33241,
    "value": 18.5,
    "error": 0.5,
    "units": "ms",
    "masterid": "master.chromium.perf",
    "buildername": "Linux Builder",
    "buildnumber": 75,
    "supplemental_columns": {
      "r_webkit_rev": "167808",
      "a_default_rev": "r_webkit_rev"
    }
  },
  {
    "master": "ChromiumPerf",
    "bot": "linux-release",
    "test": "sunspider/string-unpack-code",
    "revision": 33241,
    "value": 18.4,
    "error": 0.489,
    "units": "ms",
    "masterid": "master.chromium.perf",
    "buildername": "Linux Builder",
    "buildnumber": 75,
    "supplemental_columns": {
      "r_webkit_rev": "167808",
      "a_default_rev": "r_webkit_rev"
    }
  }
]
```

Optional fields:

 * `units` (string): The (y-axis) units for this point.
 * `error` (float): A standard error or standard deviation value.
 * `supplemental_columns`: A dictionary of other data associated with
 this point.
   * Properties starting with `r_` are revision/version numbers.
   * Properties starting with `d_` are extra data numbers.
   * Properties starting with `a_` are extra metadata strings.
     * `a_default_rev`: The name of another supplemental property key
     starting with `r_`.
     * `a_stdio_uri`: Link to stdio logs for the test run.
 * `higher_is_better` (boolean): Can be used to explicitly define the
 improvement direction.

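The prefix convention above can be applied mechanically. The sketch below groups supplemental column keys by their `r_`/`d_`/`a_` prefixes; the function name and group labels are illustrative, not part of the dashboard API.

```python
def split_supplemental_columns(columns):
    """Group supplemental column keys by the prefixes described above:
    r_ (revision/version numbers), d_ (extra data numbers), and
    a_ (extra metadata strings). Illustrative sketch only."""
    groups = {"revisions": {}, "data": {}, "metadata": {}, "other": {}}
    for key, value in columns.items():
        if key.startswith("r_"):
            groups["revisions"][key] = value
        elif key.startswith("d_"):
            groups["data"][key] = value
        elif key.startswith("a_"):
            groups["metadata"][key] = value
        else:
            groups["other"][key] = value
    return groups

groups = split_supplemental_columns({
    "r_webkit_rev": "167808",
    "a_default_rev": "r_webkit_rev",
    "d_bytes": 1024,  # hypothetical extra data number
})
```
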
## Providing test and unit information

Sending test descriptions is supported with Dashboard JSON v1.
Test descriptions for Telemetry tests are provided in code for the
benchmarks, and are included by Telemetry in the chart JSON output.

## Relevant code links

Implementations of code that sends data to the dashboard:

 * `chromium/build/scripts/slave/results_dashboard.py`
 * `chromiumos/src/third_party/autotest/files/tko/perf_upload/perf_uploader.py`

## Getting set up with new test results

Once you're ready to start sending data to the real perf dashboard, there
are a few more things you might want to do. First, in order for the
dashboard to accept the data, the IP of the sender must be whitelisted;
you can request this by filing an issue.

If your data is internal-only, you can request that it be marked
as such, again by filing an issue.

Finally, if you want to monitor your test results, you can decide
which tests you want to be monitored, who should be receiving alerts, and
whether you want to set any special thresholds for alerting.

## Contact

In general, for questions or requests you can email
chrome-perf-dashboard-team@google.com.
    207