LAVA is an automated validation architecture primarily aimed at testing deployments of systems based around the Linux kernel on ARM devices, specifically ARMv7 and later. The current range of boards (device types) supported by this LAVA instance can be seen on the scheduler status page, which includes details of how many boards of each type are available for tests, as well as the currently running jobs.
Each test can provide a result, either as a measurement with units or as a pass/fail/skip outcome, with the results from each test job being bundled into a set.
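As an illustration of the two kinds of result, a minimal Python sketch of a result record is shown below. The field names are illustrative assumptions, not LAVA's actual bundle schema:

```python
# Hypothetical sketch of a test result record; field names are
# illustrative assumptions, not LAVA's documented schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TestResult:
    test_case_id: str
    result: str                       # "pass", "fail" or "skip"
    measurement: Optional[float] = None
    units: Optional[str] = None

# One measurement-bearing result and one plain pass/fail result,
# collected into a set for a single test job:
boot = TestResult("boot-time", "pass", measurement=4.2, units="seconds")
net = TestResult("network-up", "pass")
job_results = [boot, net]
```

The point of the sketch is only that a result may or may not carry a measurement; either way it always has a pass/fail/skip outcome.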
Tests can range from something as simple as using ping with a known address to ensure that the kernel has raised the network interface correctly, to a single result obtained by downloading, compiling and executing a third-party test suite.
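A check of the simple kind reduces to reporting pass or fail from a command's exit status. A minimal sketch in Python (the `run_check` helper and the gateway address in the comment are illustrative assumptions, not LAVA's own test runner):

```python
# Hypothetical sketch: derive a pass/fail result from a command's
# exit status, e.g. pinging a known address to verify the network
# interface came up. Not LAVA's actual test runner.
import subprocess
import sys

def run_check(cmd):
    """Run a command; return "pass" on exit 0, "fail" otherwise."""
    proc = subprocess.run(cmd, stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    return "pass" if proc.returncode == 0 else "fail"

# In a real job this might be:
#   run_check(["ping", "-c", "4", "192.168.1.1"])
# Demonstrated here with a command guaranteed to exist:
demo = run_check([sys.executable, "-c", "raise SystemExit(0)"])
```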
Tests can be run on a single device or combined across multiple devices, and some devices can use dedicated test hardware like LMP, with jobs selecting those devices using tags. LAVA does not dictate which tests can be run, so to get an idea of what tests other people have been running in LAVA, take a look at the Dashboard.

Bundles contain details of the environment in which the test was run as well as the test results from completed jobs. Bundles are collected into bundle streams, some of which are publicly visible. Each bundle stream provides access to bundles of test results, and each bundle can be inspected, exported or downloaded for further analysis. Exports are available as CSV or JSON. A variety of queries are supported over XMLRPC.
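Because bundles can be exported as JSON, the results lend themselves to offline post-processing. A hedged sketch of such processing follows; the `test_results` structure below is an assumption made for illustration, not the documented export format:

```python
# Tally pass/fail outcomes from a JSON export of test results.
# The structure parsed here is an illustrative assumption, not
# LAVA's documented bundle format.
import json
from collections import Counter

export = json.loads("""
{"test_results": [
    {"test_case_id": "network-up", "result": "pass"},
    {"test_case_id": "boot-time",  "result": "pass"},
    {"test_case_id": "usb-probe",  "result": "fail"}
]}
""")

totals = Counter(r["result"] for r in export["test_results"])
print(totals["pass"], totals["fail"])  # → 2 1
```

The same kind of tallying could be done over a CSV export with the standard `csv` module.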
Dashboard filters allow results in bundles to be compared by matching criteria about the device under test, the type of test being run or most other elements of a test job. Filters then provide the basis for image reports which can provide detailed graphs of results over time, with links back to individual tests, output from the LAVA log file and the original test job definition.