page.title=Automating the tests
@jd:body

<!--
    Copyright 2015 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->

<div id="qv-wrapper">
<div id="qv">
<h2>In this document</h2>
<ol id="auto-toc">
</ol>
</div>
</div>

<h2 id=intro>Introduction</h2>

<p>Deqp test modules can be integrated into automated test systems in multiple
ways. The best approach depends on the existing test infrastructure and the
target environment.</p>

<p>The primary output from a test run is always the test log file, i.e. the file
with a <code>.qpa</code> suffix. Full test results can be parsed from the test
log. Console output is debug information only and may not be available on all
platforms.</p>

<p>Test binaries can be invoked directly from a test automation system. The test
binary can be launched for a specific case, for a test set, or for all available
tests. If a fatal error occurs during execution (such as certain API errors or a
crash), test execution aborts. For regression testing, the best approach is to
invoke the test binaries for individual cases or small test sets separately, so
that partial results are available even in the event of a hard failure.</p>
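<p>As an illustration, a direct invocation of the GLES2 module binary for one
functional test group might look like the following sketch. The case pattern and
log file name are placeholders, and the exact set of supported
<code>--deqp-*</code> options depends on the build, so check the binary's
<code>--help</code> output before relying on them.</p>

<pre>
# Example only: run a subset of cases and write the test log to a chosen file
deqp-gles2 --deqp-case=dEQP-GLES2.functional.* \
    --deqp-log-filename=dEQP-GLES2-functional.qpa
</pre>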
<p>The deqp comes with command line test execution tools that can be used in
combination with the execution service to achieve a more robust integration.
The executor detects test process termination and will resume test execution on
the next available case. A single log file is produced from the full test
session. This setup is ideal for lightweight test systems that don't provide
crash recovery facilities.</p>

<h2 id=command_line_test_execution_tools>Command line test execution tools</h2>

<p>The current command line tool set includes a remote test execution tool, a
test log comparison generator for regression analysis, a test-log-to-CSV
converter, a test-log-to-XML converter, and a test-log-to-JUnit converter.</p>

<p>The source code for these tools is in the <code>executor</code> directory,
and the binaries are built into the <code>&lt;builddir&gt;/executor</code>
directory.</p>

<h3 id=command_line_test_executor>Command line Test Executor</h3>

<p>The command line Test Executor is a portable C++ tool for launching a test
run on a device and collecting the resulting logs from it over TCP/IP. The
Executor communicates with the execution service (execserver) on the target
device. Together they provide functionality such as recovery from test process
crashes. The following examples demonstrate how to use the command line Test
Executor (use <code>--help</code> for more details):</p>

<h4 id=example_1_run_gles2_functional_tests>Example 1: Run GLES2 functional tests on an Android device</h4>

<pre>
executor --connect=127.0.0.1 --port=50016 \
    --binaryname=com.drawelements.deqp/android.app.NativeActivity \
    --caselistdir=caselists \
    --testset=dEQP-GLES2.* --out=BatchResult.qpa \
    --cmdline="--deqp-crashhandler=enable --deqp-watchdog=enable --deqp-gl-config-name=rgba8888d24s8"
</pre>

<h4 id=example_2_continue_a_partial_opengl>Example 2: Continue a partial OpenGL ES 2 test run locally</h4>

<pre>
executor --start-server=execserver/execserver --port=50016 \
    --binaryname=deqp-gles2 --workdir=modules/opengl \
    --caselistdir=caselists \
    --testset=dEQP-GLES2.* --exclude=dEQP-GLES2.performance.* \
    --in=BatchResult.qpa --out=BatchResult.qpa
</pre>

<h3 id=test_log_csv_export_and_compare>Test log CSV export and compare</h3>

<p>The deqp has a tool for converting test logs (<code>.qpa</code> files) into
CSV files. The CSV output contains a list of test cases and their results. The
tool can also compare two or more batch results and list only the test cases
that have different status codes in the input batch results. The comparison also
prints the number of matching cases.</p>

<p>The output in CSV format is very practical for further processing with
standard command line utilities or with a spreadsheet editor. An additional
human-readable plain-text format can be selected with the
<code>--format=text</code> command line argument.</p>

<h4 id=example_1_export_test_log_in_csv_format>Example 1: Export a test log in CSV format</h4>

<pre>
testlog-to-csv --value=code BatchResult.qpa > Result_statuscodes.csv
testlog-to-csv --value=details BatchResult.qpa > Result_statusdetails.csv
</pre>

<h4 id=example_2_list_differences>Example 2: List differences in test results between two test logs</h4>

<pre>
testlog-to-csv --mode=diff --format=text Device_v1.qpa Device_v2.qpa
</pre>

<p class="note"><strong>Note:</strong> The argument <code>--value=code</code> outputs the test result code, such as "Pass" or "Fail". The argument <code>--value=details</code> selects the further explanation of the result or the numerical value produced by a performance, capability, or accuracy test.</p>

<h3 id=test_log_xml_export>Test log XML export</h3>

<p>Test log files can be converted to valid XML documents using the <code>testlog-to-xml</code> utility. Two output modes are supported:</p>

<ul>
  <li> Separate documents mode, where each test case and a <code>caselist.xml</code> summary document are written to a destination directory
  <li> Single file mode, where all results in the <code>.qpa</code> file are written to a single XML document
</ul>

<p>Exported test log files can be viewed in a browser using an XML style sheet.
Sample style sheet documents (<code>testlog.xsl</code> and <code>testlog.css</code>)
are provided in the <code>doc/testlog-stylesheet</code> directory. To render the
log files in a browser, copy the two style sheet files into the same directory
where the exported XML documents are located.</p>

<p>If you are using Google Chrome, the files must be accessed over HTTP because
Chrome limits local file access for security reasons. The standard Python
installation includes a basic HTTP server that can be launched to serve the
current directory with the <code>python -m SimpleHTTPServer 8000</code> command.
After launching the server, point the Chrome browser to
<code>http://localhost:8000</code> to view the test log.</p>
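<p>Note that the <code>SimpleHTTPServer</code> module is specific to Python 2;
on systems where Python 3 is the default, the equivalent module is
<code>http.server</code>. Depending on the installed Python version, run one of
the following in the directory containing the exported XML documents:</p>

<pre>
# Python 2
python -m SimpleHTTPServer 8000

# Python 3
python3 -m http.server 8000
</pre>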
<h3 id=conversion_to_a_junit_test_log>Conversion to a JUnit test log</h3>

<p>Many test automation systems can generate test run result reports from JUnit
output. The deqp test log files can be converted to the JUnit output format
using the <code>testlog-to-junit</code> tool.</p>

<p>The tool currently supports translating the test case verdict only. As JUnit
supports only "pass" and "fail" results, a passing deqp result is mapped to a
JUnit "pass" and all other results are considered failures. The original deqp
result code is available in the JUnit output. Other data, such as log messages
and result images, are not preserved in the conversion.</p>
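<p>As a rough sketch, the conversion is run by pointing the tool at an input
<code>.qpa</code> file and an output file. The argument order shown below is an
assumption rather than verified usage, so run the tool without arguments or
consult its help output to confirm:</p>

<pre>
# Assumed usage: input .qpa file followed by the output JUnit XML file
testlog-to-junit BatchResult.qpa BatchResult-junit.xml
</pre>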