
Goal

The primary goal of the GNOME Hardware Testing project is to track the performance of GNOME on a range of hardware, in order to validate changes that are made to improve performance, and to catch performance regressions quickly when they happen.

The project is not meant for testing hardware compatibility. Hardware compatibility primarily depends on lower levels of the stack, such as the Linux kernel and Mesa GL drivers, rather than the software that GNOME maintains, and testing hardware compatibility, especially when it comes to such things as wireless cards, requires a much bigger set of hardware and a more complicated test harness than we can easily provide.

Architecture

Architecture Diagram

The basic requirement we have for the testing process is that the machines running the tests must never get permanently stuck. No matter how badly the tested software locks up the machine, we always want to be able to update to the next version and test that. For that reason, the machines running the tests do not download software or schedule tests themselves. Instead, in each location where there are test machines, a controller system handles these tasks.

The controller system controls power to the test system (via USB or serial power control), and the test system is configured in the BIOS to boot from the network when power is restored. Updating to a new version is handled by booting the test system from the network with a specialized operating system image that the controller exports. To run tests, the network boot configuration is changed so that what is loaded from the network is just a boot loader that triggers a local boot.
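
As a concrete illustration, with a pxelinux-style network boot setup the controller can switch a machine between the two modes by rewriting the per-machine boot configuration file it exports. The following sketch assumes such a setup and a Node.js controller script; the paths, file names, and image names are hypothetical, not taken from the actual deployment.

  // Sketch: switch a test machine between "update" and "local boot"
  // modes by rewriting the pxelinux configuration that the controller
  // exports over TFTP. All paths and image names are hypothetical.
  const fs = require('fs');

  const TFTP_DIR = '/srv/tftp/pxelinux.cfg';

  // Boots the specialized update image from the network.
  const UPDATE_CONFIG =
      'DEFAULT update\n' +
      'LABEL update\n' +
      '  KERNEL images/updater-vmlinuz\n' +
      '  APPEND initrd=images/updater-initrd.img\n';

  // Just a boot loader that hands control to the local disk, so the
  // freshly installed tree boots and the tests run.
  const LOCALBOOT_CONFIG =
      'DEFAULT local\n' +
      'LABEL local\n' +
      '  LOCALBOOT 0\n';

  // pxelinux looks up a per-machine file named after the MAC address,
  // e.g. '01-aa-bb-cc-dd-ee-ff'.
  function setBootMode(machineConfigName, update) {
      fs.writeFileSync(TFTP_DIR + '/' + machineConfigName,
                       update ? UPDATE_CONFIG : LOCALBOOT_CONFIG);
  }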

Overview of testing process

  1. A new tree is built on build.gnome.org
  2. A controller machine notices the new build and downloads it to a local repository.
  3. For each target, the controller:
    1. Boots the machine hosting the target to update to the latest version of the tree
    2. Boots the machine again to run tests
    3. Collects the test results that are logged to the network
    4. Uploads those test results to perf.gnome.org
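
Put together, the controller's per-target cycle might be sketched as follows. This is only an illustration of the sequence above, not the actual controller code: every helper is a stub with an invented name, standing in for the real power-control, boot-configuration, result-collection, and upload logic.

  // Hypothetical sketch of the controller's per-target test cycle;
  // the numbered comments correspond to the steps listed above.
  async function setBootMode(target, mode) { /* rewrite the exported PXE config */ }
  async function powerCycle(target)        { /* toggle USB/serial power control */ }
  async function collectResults(target)    { return []; /* read results logged to the network */ }
  async function uploadResults(results)    { /* send to perf.gnome.org */ }

  async function runCycle(target) {
      await setBootMode(target, 'update');  // 3.1: boot to update the tree
      await powerCycle(target);

      await setBootMode(target, 'local');   // 3.2: boot again to run tests
      await powerCycle(target);

      const results = await collectResults(target);  // 3.3
      await uploadResults(results);                   // 3.4
  }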

Measuring Metrics

The end result of running the test process is a set of metrics. A metric is simply a number (with units) that represents some aspect of the performance of the user interface: for example, how long it takes to draw a frame of an animation, or the amount of memory that is used when an application is started. In detail, an example looks like this:

name: gedit.startTime
units: us
description: time from launching gedit until when the main window is fully drawn and displayed to the user.

One use of the units is to allow the interface of perf.gnome.org to display the results in a more human-readable fashion. If values of gedit.startTime range from 800,000 to 1,100,000 microseconds, it can instead display the values as 0.8 seconds to 1.1 seconds.
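
A minimal sketch of that kind of conversion, assuming the interface picks one scale for a whole series of values; the scaling rules here are illustrative, not perf.gnome.org's actual code:

  // Sketch: choose one display scale for a series of values so that
  // e.g. 800,000..1,100,000 us is shown as 0.8..1.1 s.
  function scaleForSeries(values, units) {
      if (units !== 'us')
          return { divisor: 1, label: units };
      const max = Math.max.apply(null, values);
      if (max >= 1000000) return { divisor: 1000000, label: 's' };
      if (max >= 1000)    return { divisor: 1000,    label: 'ms' };
      return { divisor: 1, label: 'us' };
  }

  const scale = scaleForSeries([800000, 1100000], 'us');
  [800000, 1100000].map(v => (v / scale.divisor) + ' ' + scale.label);
  // => ['0.8 s', '1.1 s']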

The actual measurement process is driven by a JavaScript script that runs inside the gnome-shell process. The script first performs a sequence of actions, recording events into an event log. It then analyzes the recorded events and extracts metrics from them, which are sent back to the controller machine, and from there to perf.gnome.org, to be recorded and displayed.
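
In outline, such a script might look like the sketch below. This is only an illustration of the approach: the event names and the launchApplication() helper are invented, and the real gnome-shell perf framework's API may differ.

  // Hypothetical sketch of a measurement script: perform actions,
  // record timestamped events, then turn pairs of events into metrics.
  const GLib = imports.gi.GLib;

  let eventLog = [];

  function logEvent(name) {
      eventLog.push({ name: name, time: GLib.get_monotonic_time() });
  }

  function launchApplication(name) { /* stand-in for the real action */ }

  function runTest() {
      logEvent('gedit.launch');
      launchApplication('gedit');
      // in the real script, this fires once the main window is fully drawn:
      logEvent('gedit.windowDrawn');
  }

  function extractMetrics() {
      let launch = eventLog.find(e => e.name === 'gedit.launch');
      let drawn  = eventLog.find(e => e.name === 'gedit.windowDrawn');
      // gedit.startTime from the example above, in microseconds
      return { 'gedit.startTime': drawn.time - launch.time };
  }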
