      1 # Autotest for Chromium OS developers
      2 
      3 [TOC]
      4 
      5 ## Useful documents
      6 
      7 [Autotest documentation on GitHub](https://github.com/autotest/autotest/wiki/AutotestApi):
      8 This would be a good read if you want to familiarize yourself with the basic
      9 Autotest concepts.
     10 
     11 [Gentoo Portage ebuild/eclass Information](http://www.gentoo.org/proj/en/devrel/handbook/handbook.xml?part=2):
     12 Getting to know the package build system we use.
     13 
     14 [ChromiumOS specific Portage FAQ](http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/portage-build-faq):
     15 Learning something about the way we use portage.
     16 
     17 ## Autotest and ebuild workflow
     18 
To familiarize yourself with Autotest concepts, you should start with the
upstream Autotest documentation at: https://github.com/autotest/autotest/wiki/AutotestApi

The rest of this document uses terms defined there and explains them only
briefly.
     24 
     25 ### Overview
     26 
At a high level, tests are organized into test cases, each test case being
either server or client, with one main .py file named the same as the test case
and one or more control files. In order to be able to perform all tasks on a
given test, autotest expects tests to be placed in a monolithic file structure
of:
     32 
     33 -   `/client/tests/`
     34 -   `/client/site_tests/`
     35 -   `/server/tests/`
     36 -   `/server/site_tests/`
     37 
Each test directory has to have at least a control file, but typically also has
a main job module (named the same as the test case). Furthermore, if it needs
any additional files checked in, they are typically placed in a `files/`
directory, while separate projects that can be built with a Makefile live in
the `src/` directory.
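
For illustration, a hypothetical client test `example_FooBar` following this
structure would look like (names are illustrative only):

```
client/site_tests/example_FooBar/
    control              # mandatory control file
    control.quick        # optional additional control file
    example_FooBar.py    # main job module, named after the test case
    files/               # optional additional checked-in files
    src/                 # optional sources built with a Makefile
```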
     43 
Due to structural limitations in Chromium OS, it is not possible to store all
test cases in this structure in a single large source repository, the way the
upstream autotest source (placed at `third_party/autotest/files/` in Chromium
OS) would. In particular, the following has been required in the past:
     48 
-   Having confidential (publicly inaccessible) tests or generally per-test ACLs
    for sharing with a particular partner only.
-   Storing test cases along with the project they wrap around, because the test
    requires binaries built as a by-product of the project's own build system
    (e.g. chrome or tpm tests).
     54 
     55 Furthermore, it has been desired to generally build everything that is not
     56 strongly ordered in parallel, significantly decreasing build times. That,
     57 however, requires proper dependency tree declaration and being able to specify
     58 which test cases require what dependencies, in addition to being able to
     59 process different "independent" parts of a single source repository in
     60 parallel.
     61 
     62 This leads to the ebuild workflow, which generally allows compositing any
     63 number of sources in any format into a single monolithic tree, whose contents
     64 depend on build parameters.
     65 
     66 ![ebuild workflow](./atest-diagram.png)
     67 
This allows using the standard autotest workflow without any change; however,
unlike what upstream does, the tests aren't run directly from the source
repository, but rather from a read-only staging install location. This leads to
certain differences in workflow:
     72 
     73 -   Source may live in an arbitrary location or can be generated on the fly.
     74     Anything that can be created as an ebuild (shell script) can be a test source.
     75     (cros-workon may be utilised, introducing a fairly standard Chromium OS
     76     project workflow)
     77 -   The staging location (`/build/${board}/usr/local/autotest/`) may not be
     78     modified; if one wants to modify it, they have to find the source to it
     79     (using other tools, see FAQ).
     80 -   Propagating source changes requires an emerge step.
     81 
     82 ### Ebuild setup, autotest eclass
     83 
     84 **NOTE**: This assumes some basic knowledge of how ebuilds in Chromium OS work.
     85 Further documentation is available at http://www.chromium.org/chromium-os/how-tos-and-troubleshooting/portage-build-faq
     86 
     87 An **autotest ebuild** is an ebuild that produces test cases and installs them into
     88 the staging area. It has three general tasks:
     89 
-   Obtain the source - This is generally (but not necessarily) provided by the
    cros-workon eclass. It could also work with the more standard tarball
    SRC_URI pathway or generally any shell code executed in `src_unpack()`.
-   Prepare test cases - This includes, but is not limited to, preprocessing any
    source, copying source files or intermediate binaries into the expected
    locations, where they will be taken over by autotest code, specifically the
    `setup()` function of the appropriate test. Typically, this is not needed.
     97 -   Call autotest to "build" all sources and subsequently install them - This
     98     should be done exclusively by inheriting the **autotest eclass**, which
     99     bundles up all the necessary code to install into the intermediate location.
    100 
The **autotest eclass** is inherited by all autotest ebuilds; it only requires
a number of variables to be specified and otherwise works by itself. Most
variables describe the locations and listings of work that needs to be done:
    104 
    105 -   Location variables define the paths to directories containing the test
    106 files:
    107 
    108     -   `AUTOTEST_{CLIENT,SERVER}_{TESTS,SITE_TESTS}`
    109     -   `AUTOTEST_{DEPS,PROFILERS,CONFIG}`
    110 
    111     These typically only need to be specified if they differ from the defaults
    112     (which follow the upstream directory structure)
    113 
    114 -   List variables (`AUTOTEST_*_LIST`) define the list of deps, profilers,
    115     configs that should be handled by this ebuild.
-   The IUSE test list specification, `TESTS=`, is a USE_EXPANDed specification
    of tests managed by the given ebuild. By virtue of being an IUSE variable,
    all of the options are visible as USE flag toggles while building the
    ebuild, unlike the list variables, which are fixed and require modifying
    the ebuild to change.
    121 
Each ebuild usually operates on a single source repository. That does not
always have to hold true, however; in the case of autotest, many ebuilds check
out the sources of the same source repository (*autotest.git*). Invariably, this
means that they have to be careful not to install the same files, splitting the
sources between themselves to avoid file install collisions.
If more than one autotest ebuild operates on the same source repository, they
**have to** use the above variables to define mutually exclusive slices in order
not to collide during installation. Generally, if we have a source repository
with client site_tests A and B, you can have either:
    131 
-   one ebuild with `IUSE_TESTS="+tests_A +tests_B"`
-   two different ebuilds, one with `IUSE_TESTS="+tests_A"`, the other with
    `IUSE_TESTS="+tests_B"`
    135 
    136 As soon as an overlap between ebuilds happens, either an outside mechanism has
    137 to ensure the overlapping tests are never enabled at the same time, or file
    138 collisions happen.
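
As a sketch of how these variables fit together, a hypothetical ebuild slicing
off client site_tests A and B might look roughly like this (modeled on existing
autotest ebuilds; names and details are illustrative, not a drop-in ebuild):

```
# Hypothetical autotest-tests-foo-9999.ebuild (sketch only).
EAPI=4
CROS_WORKON_PROJECT="chromiumos/third_party/autotest"

inherit cros-workon autotest

DESCRIPTION="Autotest tests for the hypothetical foo area"
LICENSE="GPL-2"
SLOT="0"
KEYWORDS="~*"

# Slice off only the tests this ebuild owns, so that other ebuilds building
# from the same autotest.git checkout do not install the same files.
IUSE_TESTS="
	+tests_A
	+tests_B
"
IUSE="${IUSE_TESTS}"
```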
    139 
    140 
    141 ## Building tests
    142 
    143 Fundamentally, a test has two main phases:
    144 
-   `run_*()` - This is the main part that performs all testing and is
    invoked by the control files, once or repeatedly.
-   `setup()` - This function, present in the test case's main .py file, is
    supposed to prepare the test for running. This includes building any
    binaries, initializing data, etc.
    150 
During building via emerge, autotest will call the `setup()` function of all
test cases/deps involved. This is supposed to prepare everything. Typically,
this will invoke make on a Makefile present in the test's `src/` directory, but
it can involve any other transformation of sources (and may be empty if there's
nothing to build).
**Note**, however, that `setup()` is implicitly called many times as test
initialization even during the `run_*()` step, so it should be a no-op on
reentry that merely verifies everything is in order; see the sketch below.
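
A minimal example of such an idempotent `setup()` (hypothetical test name;
assumes a Makefile in `src/`):

```
import os

from autotest_lib.client.bin import test, utils

class example_FooBar(test.test):
    version = 1

    def setup(self):
        # Called at emerge time on the host and again as part of test
        # initialization on the target, so it must be safe to re-enter.
        binary = os.path.join(self.srcdir, 'foobar')
        if os.path.exists(binary):
            return              # already built; nothing left to do
        os.chdir(self.srcdir)
        utils.make()            # run make on the Makefile in src/
```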
    159 
Unlike the `run_*()` functions, `setup()` gets called during the prepare phase,
which happens on the **host and target alike**. This creates a problem with
code that is depended on or directly executed during `setup()`. Python modules
that are imported in any pathway leading to `setup()` are needed both in the
host chroot and on the target board to properly support the test. Any binaries
would need to be compiled using the host compiler and either be skipped on the
target (incremental `setup()` runs) or cross-compiled again and dynamically
chosen while running on the target.
    168 
**More importantly**, in the Chromium OS scenario, doing any write operations
inside the `setup()` function will lead to **access denied failures**, because
tests are run from the intermediate read-only location.
    172 
    173 Given the above, building is as easy as **emerge**-ing the autotest ebuild that
    174 contains our test.
    175 ```
    176 $ emerge-${board} ${test_ebuild}
    177 ```
    178 
*Currently, tests are organized within these notable ebuilds* (see the
[FAQ](#Q1_What-autotest-ebuilds-are-out-there_) for the full list):
    181 
    182 -   chromeos-base/autotest-tests - The main ebuild handling most of autotest.git
    183     repository and its client and server tests.
    184 -   chromeos-base/autotest-tests-* - Various ebuilds that build other parts of
    185     autotest.git
    186 -   chromeos-base/chromeos-chrome - chrome tests; the tests that are part of
    187     chrome
    188 
    189 ### Building tests selectively
    190 
Test cases built by ebuilds generally come in large bundles. Sometimes only a
subset, or generally a different set, of the tests provided by a given ebuild
is desired. That is achieved using a
[USE_EXPANDed](http://devmanual.gentoo.org/general-concepts/use-flags/index.html)
flag called TESTS.
    196 
    197 All USE flags (and therefore tests) have a default state, either enabled (+) or
    198 disabled (-), specified directly in the ebuild, that can be manually overridden
    199 from the commandline. There are two ways to do that.
    200 
-   Non-incremental - Simply override the default selection with an entirely new
    selection, ignoring the defaults. This is useful if you develop a single
    test and don't want to waste time building the others.
    204 
    205         $ TESTS="test1 test2" emerge-${board} ${ebuild}
    206 
    207 -   Incremental - All USE_EXPAND flags are also accessible as USE flags, with
    208     the appropriate prefix, and can be used incrementally to selectively
    209     enable/disable tests in addition to the defaults. This can be useful if you
    210     aim to enable a test that is disabled by default and want to test locally.
    211 
    212         $ USE="test_to_be_enabled -test_to_be_disabled" emerge-${board} \
    213           ${ebuild}
    214 
For operations across all tests, portage supports the incremental USE wildcard
"tests_*" to select all tests at once (or "-tests_*" to deselect them).
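
For example (a sketch following the incremental form above):

```
$ USE="-tests_*" emerge-${board} ${ebuild}              # deselect all tests
$ USE="-tests_* tests_test1" emerge-${board} ${ebuild}  # build only test1
```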
    218 
**NOTE**: Both incremental and non-incremental methods can be set/overridden by
(in this order): the ebuild (default values), make.profile, make.conf,
/etc/portage, and the command line (see above). That means that any settings
provided on the emerge command line override everything else.
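
For instance, to make a selection persistent rather than passing it on each
command line, it can be placed in the board's /etc/portage. A sketch, assuming
the board sysroot carries its own portage configuration (the exact path and
package name depend on your setup):

```
# /build/${board}/etc/portage/package.use
chromeos-base/autotest-tests tests_test1 -tests_test2
```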
    223 
    224 ## Running tests
    225 
    226 **NOTE**: In order to run tests on your device, it needs to have a
    227 [test-enabled image](#W4_Create-and-run-a-test-enabled-image-on-your-device).
    228 
    229 When running tests, fundamentally, you want to either:
    230 
    231 -   Run sets of tests manually - Use case: Developing test cases
    232 
    233     Take your local test sources, modify them, and then attempt to run them on a
    234     target machine using autotest. You are generally responsible for making sure
    235     that the machine is imaged to a test image, and the image contains all the
    236     dependencies needed to support your tests.
    237 
    238 -   Verify a given image - Use case: Developing the projects subject to testing
    239 
    Take an image, re-image the target device, and run a test suite on it. This
    requires either using build-time autotest artifacts or reproducing them by
    not modifying or resyncing your sources after the image has been built.
    243 
    244 ### Running tests on a machine
    245 
    246 Autotests are run with a tool called
    247 [test_that](https://chromium.googlesource.com/chromiumos/third_party/autotest/+/refs/heads/master/docs/test-that.md).
    248 
### Running tests in a VM - cros_run_vm_test

VM tests are conveniently wrapped in a script, `cros_run_vm_test`, that sets up
the VM using a given image and then calls `test_that`. This is run by builders
to test using the Smoke suite.
    254 
If you want to run your tests in a VM (see
[here](https://www.chromium.org/chromium-os/how-tos-and-troubleshooting/running-chromeos-image-under-virtual-machines)
for basic instructions for setting up KVM with cros images), be aware of the
following:
    258 
-   `cros_run_vm_test` starts up a VM and runs autotests using the port
    specified (defaults to 9222). As an example:
    261 
    262         $ ./bin/cros_run_vm_test --test_case=suite_Smoke \
    263         --image_path=<my_image_to_start or don't set to use most recent build> \
    264         --board=x86-generic
    265 
    266 -   The emulator command line redirects localhost port 9222 to the emulated
    267     machine's port 22 to allow you to ssh into the emulator. For Chromium OS to
    268     actually listen on this port you must append the `--test_image` parameter
    269     when you run the `./image_to_vm.sh` script, or perhaps run the
    270     `mod_image_for_test.sh` script instead.
    271 -   You can then run tests on the correct ssh port with something like
    272 
    273         $ test_that --board=x86-generic localhost:9222 'f:.*platform_BootPerf/control'
    274 
-   To set the sudo password, run set_shared_user_password. Then, within the
    emulator, you can press Ctrl-Alt-T to get a terminal and sudo using this
    password. This will also allow you to ssh into the unit with, e.g.
    278 
    279         $ ssh -p 9222 root@localhost
    280 
    281 -   Warning: After
    282     [crbug/710629](https://bugs.chromium.org/p/chromium/issues/detail?id=710629),
    283     'betty' is the only board regularly run through pre-CQ and CQ VMTest and so
    284     is the most likely to work at ToT. 'betty' is based on 'amd64-generic',
    285     though, so 'amd64-generic' is likely to also work for most (non-ARC) tests.
    286 
    287 
    288 ## Result log layout structure
    289 
    290 For information regarding the layout structure please refer to the following:
    291 [autotest-results-logs](https://www.chromium.org/chromium-os/testing/test-code-labs/autotest-client-tests/autotest-results-logs)
    292 
    293 ### Interpreting test results
    294 
Running autotest produces a lot of output, which is probably not very
informative if you have not used autotest before. At the end of the
`test_that` run, you will see a summary of pass/failure status, along with
performance results:
    299 
    300 ```
    301 22:44:30 INFO | Using installation dir /home/autotest
    302 22:44:30 ERROR| Could not install autotest from repos
    303 22:44:32 INFO | Installation of autotest completed
    304 22:44:32 INFO | GOOD  ----  Autotest.install timestamp=1263509072 localtime=Jan 14 22:44:32
    305 22:44:33 INFO | Executing /home/autotest/bin/autotest /home/autotest/control phase 0
    306 22:44:36 INFO | START  ---- ----  timestamp=1263509075 localtime=Jan 14 14:44:35
    307 22:44:36 INFO |  START   sleeptest sleeptest timestamp=1263509076 localtime=Jan 14 14:44:36
    308 22:44:36 INFO | Bundling /usr/local/autotest/client/tests/sleeptest into test-sleeptest.tar.bz2
    309 22:44:40 INFO |   GOOD  sleeptest  sleeptest  timestamp=1263509079 localtime=Jan 14 14:44:39 completed successfully
    310 22:44:40 INFO |   END GOOD  sleeptest sleeptest  timestamp=1263509079 localtime=Jan 14 14:44:39
    311 22:44:42 INFO | END GOOD ---- ---- timestamp=1263509082 localtime=Jan 14 14:44:42
    312 22:44:44 INFO | Client complete
    313 22:44:45 INFO | Finished processing control file
    314 ```
    315 
    316 `test_that` will leave around a temp directory populated with diagnostic information:
    317 
    318 ```
    319 Finished running tests. Results can be found in /tmp/test_that_results_j8GoWH or /tmp/test_that_latest
    320 ```
    321 
    322 This directory will contain a directory per test run.  Each directory contains
    323 the logs pertaining to that test run.
    324 
    325 In that directory some interesting files are:
    326 
-   `${TEST}/debug/client.DEBUG` - the most detailed output from running the
    client-side test
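
The exact layout varies, but a single run typically leaves something along
these lines (an illustrative sketch, not an exhaustive listing):

```
/tmp/test_that_results_XXXXXX/
    results-1-example_FooBar/      # one directory per test run
        status.log                 # pass/fail status records
        keyval                     # test attributes and performance keyvals
        debug/
            client.DEBUG           # most detailed client-side output
        sysinfo/                   # system state collected around the run
```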
    329 
    330 ### Running tests automatically, Suites
    331 
Suites provide a mechanism for grouping tests together into test groups. They
also serve as hooks for automated runs of tests verifying various builds. Most
importantly, these are the BVT (build verification tests) and Smoke (a subset
of BVT that can run in a VM).
    336 
    337 Please refer to the [suites documentation](https://www.chromium.org/chromium-os/testing/test-suites).
    338 
    339 ## Writing and developing tests
    340 
    341 ### Writing a test
    342 
    343 For understanding and writing the actual python code for autotest, please refer
    344 to the [Developer FAQ](http://www.chromium.org/chromium-os/testing/autotest-developer-faq#TOC-Writing-Autotests)
    345 
Currently, all code should be placed in a standard layout inside the
autotest.git repository, unless technical reasons require otherwise.
Regardless, the following text assumes that code may be placed in generally any
repository.
    350 
For a test to be fully functional in Chromium OS, it has to be associated with
an ebuild. It is generally possible to run tests without an ebuild using
`test_that`, but this is discouraged, as such tests will not function with
other parts of the system.
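
As a minimal sketch (hypothetical test name; see the Developer FAQ linked above
for the authoritative details), a client test consists of a control file that
drives the run and a main module defining the test class:

```
# client/site_tests/example_FooBar/control -- executed by autotest with a
# `job` object in scope.
NAME = 'example_FooBar'
AUTHOR = 'you@chromium.org'
TEST_TYPE = 'client'
DOC = """Verifies that foo does bar."""

job.run_test('example_FooBar')
```

```
# client/site_tests/example_FooBar/example_FooBar.py
import os

from autotest_lib.client.bin import test
from autotest_lib.client.common_lib import error

class example_FooBar(test.test):
    version = 1

    def run_once(self):
        # Hypothetical check; raise error.TestFail to report a failure.
        if not os.path.exists('/etc/lsb-release'):
            raise error.TestFail('/etc/lsb-release is missing')
```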
    355 
    356 ### Making a new test work with ebuilds
    357 
    358 The choice of ebuild depends on the location of its sources. Structuring tests
    359 into more smaller ebuilds (as opposed to one ebuild per source repository)
    360 serves two purposes:
    361 
-   Categorisation - Grouping similar tests together, possibly with deps they
    use exclusively.
-   Parallelisation - Multiple independent ebuilds can build entirely in
    parallel.
-   Dependency tracking - Larger bundles of tests depend on more system
    packages, without proper resolution of which dependency belongs to which
    test. This also increases parallelism.
    369 
The current ebuild structure is largely a result of breaking off the biggest
blockers for parallelism, i.e. tests depending on chrome or similar packages,
and as such, using any of the current ebuilds should be sufficient (see the FAQ
for a listing of ebuilds).
    374 
After choosing the proper ebuild to add your test into, the test (in the form
`+tests_<testname>`) needs to be added to the IUSE_TESTS list that all autotest
ebuilds have. Failing to do so will simply make ebuilds ignore your tests
entirely. As with all USE flags, prepending + means the test will be enabled by
default; this should be the default, unless you want to keep the test
experimental for your own use, or turn the USE flag on explicitly by other
means, e.g. in a config for a particular board only.
    382 
Should a **new ebuild** be started, it should be added to the
**chromeos-base/autotest-all** package, which is a meta-ebuild depending on all
autotest ebuild packages that can be built. autotest-all is used by the build
system to automatically build all tests that we have and therefore keep them
from randomly breaking.
    388 
    389 ### Deps
    390 
Autotest uses deps to provide de-facto dependencies within its ecosystem. A dep
is a directory in **client/deps** with a structure similar to a test case
without a control file. A test case that depends on a dep will invoke the dep's
`setup()` function in its own `setup()` function and will be able to access the
files provided by the dep, as the sketch below shows. Note that autotest deps
have nothing to do with system dependencies.
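
A sketch of the consuming side (dep name illustrative; this mirrors the common
pattern of calling `job.setup_dep()` from the test's `setup()` and locating the
dep's files under the autotest `deps/` directory):

```
import os

from autotest_lib.client.bin import test

class example_DepConsumer(test.test):
    version = 1

    def setup(self):
        # Invoke the dep's setup() as part of this test's own setup().
        self.job.setup_dep(['pyauto_dep'])

    def run_once(self):
        # Files installed by the dep live under deps/<dep_name>/.
        dep_dir = os.path.join(self.autodir, 'deps', 'pyauto_dep')
        self.job.install_pkg('pyauto_dep', 'dep', dep_dir)
```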
    397 
As the calls to a dep are internal autotest code, it is not possible to
automatically detect these and make them inter-package dependencies at the
ebuild level. For that reason, deps should either be
[provided](#Ebuild-setup_autotest-eclass) by the same ebuild that builds the
tests that consume them, or ebuild dependencies need to be declared manually
between the dep ebuild and the test ebuild that uses it. An **autotest-deponly**
eclass exists to provide a solution for ebuilds that build only deps and no
tests. A number of deponly ebuilds already exist.
    406 
    407 Common deps are:
    408 
    409 -   chrome_test - Intending to use any of the test binaries produced by chrome.
    410 -   pyauto_dep - Using pyauto for your code.
    411 
    412 ### Test naming conventions
    413 
Generally, the naming convention runs like this:

\<component\>_\<TestName\>
    417 
    418 That convention names the directory containing the test code.  It also names
    419 the .py file containing the test code, and the class of the Autotest test.
    420 
If there's only one control file, it's named `control`. The test's NAME in the
control file is \<component\>_\<TestName\>, like the directory and .py
file.
    424 
    425 If there are multiple control files for a test, they are named
    426 control.\<testcase\>. These tests' NAMEs are then
    427 \<component\>_\<TestName\>.\<testcase\>.
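
For example, for a hypothetical test in the `network` component area:

```
client/site_tests/network_Ping/
    network_Ping.py    # defines class network_Ping
    control            # NAME = 'network_Ping'
    control.wifi       # NAME = 'network_Ping.wifi'
```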
    428 
    429 ## Common workflows
    430 
    431 ### W1. Develop and iterate on a test
    432 
    433 1.  Set up the environment.
    434 
    435         $ cd ~/trunk/src/third_party/autotest/files/
    436         $ export TESTS=<the test cases to iterate on>
    437         $ EBUILD=<the ebuild that contains TEST>
    438         $ board=<the board on which to develop>
    439 
    440 2.  Ensure cros_workon is started
    441 
    442         $ cros_workon --board=${board} start ${EBUILD}
    443         $ repo sync # Necessary only if you use minilayout.
    444 
3.  Make modifications (on first run, you may want to just do steps 4-5 to
    verify everything works before you touch it \& break it)
    447 
    448         $ ...
    449 
    450 4.  Build test (TESTS= is not necessary if you exported it before)
    451 
    452         $ emerge-$board $EBUILD
    453 
    454 5.  Run test to make sure it works before you touch it
    455 
    456         $ test_that <machine IP> ${TESTS}
    457 
6.  Go to step 3 to iterate
    459 7.  Clean up environment
    460 
    461         $ cros_workon --board=${board} stop ${EBUILD}
    462         $ unset TESTS
    463 
    464 ### W2. Creating a test - steps and checklist
    465 
    466 When creating a test, the following steps should be done/verified.
    467 
1.  Create the actual test directory, the main test files/sources, and at least
    one control file
    470 2.  Find the appropriate ebuild package and start working on it:
    471 
    472         $ cros_workon --board=${board} start <package>
    473 
3.  Add the new test into the IUSE_TESTS list of the 9999 ebuild
4.  Try building (make sure it's the 9999 version being built):
    476 
    477         $ TESTS=<test> emerge-$board <package>
    478 
    479 5.  Try running:
    480 
    481         $ test_that <IP> <test>
    482 
6.  Iterate on steps 4-5 and modify the source until happy with the initial
    version.
7.  Commit the test source first; when it is safely in, commit the 9999 ebuild
    version change.
    486 8.  Cleanup
    487 
    488          $ cros_workon --board=${board} stop <package>
    489 
    490 ### W3. Splitting autotest ebuild into two
    491 
Removing a test from one ebuild and adding it to another in the same revision
causes portage file collisions unless counter-measures are taken. Generally,
some things routinely go wrong in this process, so this checklist should help
avoid that.
    496 
    497 1.  We have ebuild **foo-0.0.1-r100** with **test** and would like to split
    498     that test off into ebuild **bar-0.0.1-r1**.
    499     Assume that:
    -   both ebuilds are using cros-workon (because it's likely the case).
    -   foo is used globally (e.g. autotest-all depends on it), rather than just
        some personal ebuild.
    503 2.  Remove **test** from foo-{0.0.1-r100,9999}; uprev foo-0.0.1-r100 (to -r101)
    504 3.  Create bar-9999 (making a copy of foo and replacing IUSE_TESTS may be a good
    505     start), with IUSE_TESTS containing just the entry for **test**
    506 4.  Verify package dependencies of test. Make bar-9999 only depend on what is
    507     needed for test, remove the dependencies from foo-9999, unless they are
    508     needed by tests that remained.
5.  Add a blocker. Since bar installs files owned by foo-0.0.1-r100 and earlier,
    the blocker's format will be:
    511 
    512         RDEPEND="!<=foo-0.0.1-r100"
    513 
    514 6.  Add a dependency to the new version of bar into
    515     chromeos-base/autotest-all-0.0.1
    516 
    517         RDEPEND="bar"
    518 
    519 7.  Change the dependency of foo in chromeos-base/autotest-all-0.0.1 to be
    520     version locked to the new rev:
    521 
    522         RDEPEND=">foo-0.0.1-r100"
    523 
    524 8.  Uprev (move) autotest-all-0.0.1-rX symlink by one.
    525 9.  Publish all as the same change list, have it reviewed, push.
    526 
    527 ### W4. Create and run a test-enabled image on your device
    528 
    529 1.  Choose which board you want to build for (we'll refer to this as ${BOARD},
    530     which is for example "x86-generic").
2.  Set up a proper portage build chroot. Go through the normal process of
    setup_board if you haven't already.
    533 
    534         $ ./build_packages --board=${BOARD}
    535 
    536 3.  Build test image.
    537 
    538         $ ./build_image --board=${BOARD} test
    539 
    540 4.  Install the Chromium OS testing image to your target machine.  This is
    541     through the standard mechanisms: either USB, or by reimaging a device
    542     currently running a previously built Chromium OS image modded for test, or
    543     by entering the shell on the machine and forcing an auto update to your
    544     machine when it's running a dev server.  For clarity we'll walk through two
    545     common ways below, but if you already know about this, just do what you
    546     normally do.
    547 
    548     -   If you choose to use a USB boot, you first put the image on USB and run
    549         this from outside the chroot.
    550 
    551             $ ./image_to_usb.sh --to /dev/sdX --board=${BOARD} \
    552               --image_name=chromiumos_test_image.bin
    553 
    554     -   Alternatively, if you happen to already have a machine running an image
    555         modified for test and you know its IP address (${REMOTE_IP}), you can
    556         avoid using a USB key and reimage it with a freshly built image by
    557         running this from outside the chroot:
    558 
    559             $ ./image_to_live.sh --remote=${REMOTE_IP} \
    560               --image=`./get_latest_image.sh \
    561               --board=${BOARD}`/chromiumos_test_image.bin
    562 
This will automatically start a dev server, ssh to your machine, cause it to
update from that dev server using memento_updater, reboot, wait for the reboot,
print out the new version updated to, and shut down your dev server.
    566 
    567 ## Troubleshooting/FAQ
    568 
    569 ### Q1: What autotest ebuilds are out there?
    570 
Note that the list of ebuilds may differ per board, as each board potentially
has a different list of overlays. To find all autotest ebuilds for board foo,
you can run:
    574 ```
$ board=foo
$ for dir in $(portageq-${board} envvar PORTDIR_OVERLAY); do
     find "${dir}" -name '*.ebuild' | xargs grep "inherit.*autotest" | \
     grep "9999" | cut -f1 -d: | \
     sed -e 's/.*\/\([^/]*\)\/\([^/]*\)\/.*\.ebuild/\1\/\2/'
   done
    581 ```
(Note: newer versions of portage warn that 'portageq envvar PORTDIR_OVERLAY'
is deprecated in favor of 'portageq repositories_configuration', so this
command may need updating accordingly.)
    584 
    585 ### Q2: I see a test of the name greattests_TestsEverything in build output/logs/whatever! How do I find which ebuild builds it?
    586 
All ebuilds have lists of tests exported as **USE_EXPANDed** lists called
**TESTS**. An expanded USE flag can be searched for in the same way as other
USE flags, but with the appropriate prefix; in this case, you would search for
**tests_greattests_TestsEverything**:
    592 ```
    593 $ use_search=tests_greattests_TestsEverything
    594 $ equery-$board hasuse $use_search
    595  * Searching for USE flag tests_greattests_TestsEverything ...
    596  * [I-O] [  ] some_ebuild_package_name:0
    597 ```
    598 
This will, however, only work on ebuilds which are **already installed**, i.e.
their potentially outdated versions.
    601 **Alternatively**, you can run a pretended emerge (emerge -p) of all autotest
    602 ebuilds and scan the output.
    603 ```
$ emerge -p ${all_ebuilds_from_Q1} | grep -C 10 ${use_search}
    605 ```
    606 
    607 ### Q3: I have an ebuild foo, where are its sources?
    608 
Generally speaking, one has to look at the ebuild source to figure that
question out (and it shouldn't be hard). However, all present autotest ebuilds
(at the time of this writing) are also cros-workon, and for those, this
    612 should always work:
    613 ```
    614 $ ebuild_search=foo
    615 $ ebuild $(equery-$board which $ebuild_search) info
    616 CROS_WORKON_SRCDIR=/home/you/trunk/src/third_party/foo
    617 CROS_WORKON_PROJECT=chromiumos/third_party/foo
    618 ```
    619 
    620 ### Q4: I have an ebuild, what tests does it build?
    621 
    622 You can run a pretended emerge on the ebuild and observe the TESTS=
    623 statement:
    624 ```
    625 $ ebuild_name=foo
    626 $ emerge-$board -pv ${ebuild_name}
    627 These are the packages that would be merged, in order:
    628 
    629 Calculating dependencies... done!
    630 [ebuild   R   ] foo-foo_version to /build/$board/ USE="autox hardened tpmtools
    631 xset -buildcheck -opengles" TESTS="enabled_test1 enabled_test2 ... enabled_testN
    632 -disabled_test1 ...disabled_testN" 0 kB [1]
    633 ```
    634 
    635 Alternately, you can use equery, which will list tests with the USE_EXPAND
    636 prefix:
    637 ```
    638 $ equery-$board uses ${ebuild_name}
    639 [ Legend : U - final flag setting for installation]
    640 [        : I - package is installed with flag     ]
    641 [ Colors : set, unset                             ]
    642  * Found these USE flags for chromeos-base/autotest-tests-9999:
    643  U I
 + + autotest                                    : <unknown>
    646  + + autox                                       : <unknown>
    647  + + buildcheck                                  : <unknown>
    648  + + hardened                                    : activate default security enhancements for toolchain (gcc, glibc, binutils)
    649  - - opengles                                    : <unknown>
    650  + + tests_enabled_test                     : <unknown>
    651  - - tests_disabled_test                      : <unknown>
    652 ```
    653 
### Q5: I'm working on some test sources, how do I know which ebuilds to cros_workon start in order to properly propagate?
    655 
You should always `cros_workon start` all ebuilds that have files that you
touched. If you're interested in a particular file/directory that is installed
in `/build/$board/usr/local/autotest/` and would like to know which package has
provided that file, you can use equery:
    660 
    661 ```
    662 $ equery-$board belongs /build/${board}/usr/local/autotest/client/site_tests/foo_bar/foo_bar.py
    663  * Searching for <filename> ...
    664 chromeos-base/autotest-tests-9999 (<filename>)
    665 ```
    666 
DON'T forget to use equery-$board. Plain equery will also work, but it will
never return anything useful.
    669 
As a rule of thumb, if you work on anything from the core autotest framework or
shared libraries (anything besides
{server,client}/{test,site_tests,deps,profilers,config}), it belongs to
chromeos-base/autotest. Individual test cases will each belong to a particular
ebuild; see Q2.
    675 
    676 It is important to cros_workon start every ebuild involved.
    677 
### Q6: I created a test, added it to an ebuild, emerged it, and I'm getting access denied failures. What did I do wrong?
    679 
    680 Your tests `setup()` function (which runs on the host before being uploaded) is
    681 probably trying to write into the read-only intermediate location. See
    682 [explanation](#Building-tests).
    683 
    684