These integration tests verify the correctness and consistency of mapbox-gl-js and mapbox-gl-native rendering.

## Organization

Tests are contained in a directory tree, generally organized by style specification property: `background-color`, `line-width`, etc., with a second level of directories below that for individual tests. For example, the test for specifying a literal `circle-radius` value lives in `test/integration/render-tests/circle-radius/literal/`.

Within a leaf directory is a `style.json` file (e.g. `circle-radius/literal/style.json`), which contains the minimal style needed for the given test case. The style can specify the map size, center, bearing, and pitch, and additional test metadata (e.g. output image dimensions).
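
As an illustrative sketch, a minimal `style.json` might look like this (the `test` metadata keys shown are typical examples, not an exhaustive list):

```json
{
  "version": 8,
  "metadata": {
    "test": {
      "width": 64,
      "height": 64
    }
  },
  "center": [0, 0],
  "zoom": 0,
  "sources": {},
  "layers": [
    {
      "id": "background",
      "type": "background",
      "paint": {"background-color": "#0000ff"}
    }
  ]
}
```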

The expected output for a given test case is in `expected.png`, e.g. `circle-radius/literal/expected.png`.

Supporting files -- glyphs, sprites, and tiles -- live in their own respective subdirectories at the top level. The test harness sets up the environment such that requests for these resources are directed to the correct location.
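
For example, the layout looks roughly like this:

```
test/integration/
├── render-tests/
│   └── circle-radius/
│       └── literal/
│           ├── style.json
│           └── expected.png
├── glyphs/
├── sprites/
└── tiles/
```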

The contents of vector tile fixtures can be read using the `vt2geojson` tool (see below).

## Running tests

To run the entire integration test suite (both render and query tests), run the following command from within the mapbox-gl-js directory:

```
npm run test-suite
```

To run only the render/query tests:

```
npm run test-render
```

or

```
npm run test-query
```

To run only the expression tests:

```
npm run test-expressions
```

### Running specific tests

To run a subset of tests or an individual test, you can pass a specific subdirectory to the `test-render` script. For example, to run all the tests for a given property, e.g. `circle-radius`:

```
$ npm run test-render tests=circle-radius
...
* passed circle-radius/antimeridian
* passed circle-radius/default
* passed circle-radius/function
* passed circle-radius/literal
* passed circle-radius/property-function
* passed circle-radius/zoom-and-property-function
6 passed (100.0%)
Results at: ./test/integration/render-tests/index.html
Done in 2.71s.
```

Or to run a single test:

```
$ npm run test-render tests=circle-radius/literal
...
* passed circle-radius/literal
1 passed (100.0%)
Results at: ./test/integration/render-tests/index.html
Done in 2.32s.
```

## Viewing test results

During a test run, the test harness will use Mapbox GL JS to create an `actual.png` image from the given `style.json`, and will then use `pixelmatch` to compare that image to `expected.png`, generating a `diff.png` highlighting the mismatched pixels (if any) in red.
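
As a rough sketch of that comparison step (assuming the images are decoded with `pngjs`; the harness's actual options may differ):

```js
const fs = require('fs');
const {PNG} = require('pngjs');
const pixelmatch = require('pixelmatch');

// Decode both PNGs into raw RGBA buffers.
const expected = PNG.sync.read(fs.readFileSync('expected.png'));
const actual = PNG.sync.read(fs.readFileSync('actual.png'));
const {width, height} = expected;
const diff = new PNG({width, height});

// Count mismatched pixels, writing them into the diff image.
const mismatch = pixelmatch(expected.data, actual.data, diff.data, width, height, {threshold: 0.1});
fs.writeFileSync('diff.png', PNG.sync.write(diff));

// The test passes when the mismatch fraction is within the allowed tolerance.
console.log(`${(mismatch / (width * height)).toFixed(4)} of pixels differ`);
```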

After the test(s) have run, you can view the results graphically by opening the `index.html` file generated by the harness:

```
open ./test/integration/render-tests/index.html
```

or

```
open ./test/integration/query-tests/index.html
```

## Running tests in the browser

Render and query tests can be run in the browser. The server that serves the test page and test fixtures starts when you run:

```
npm run watch-query
```

or

```
npm run watch-render
```

Then open the following URL in the browser of your choice to start running the tests:

```
http://localhost:7357/
```

### Running specific tests

A filter can be specified with the `filter` query param in the URL. For example, adding

```
?filter=circle-pitch
```

to the end of the URL will run only the tests whose names contain `circle-pitch`.
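
The full URL would then look like:

```
http://localhost:7357/?filter=circle-pitch
```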

You can run a specific test as follows:

```
?filter=circle-radius/antimeridian
```

### Enable ANGLE configuration on render tests

Some devices (e.g. M1 Macs) seem to run the tests with significantly fewer failures when the ANGLE backend is forced to use OpenGL.

To configure the ANGLE backend, set the desired `--use-angle` value via the `USE_ANGLE` environment variable on the command line like so:

```
USE_ANGLE={INPUT} npm run test-render
```

Accepted inputs for `USE_ANGLE` are `metal`, `gl`, `vulkan`, `swiftshader`, and `gles`. See chrome://flags/#use-angle for more information on the `--use-angle` flag.
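
For example, to force the OpenGL backend (as suggested for M1 Macs above):

```
USE_ANGLE=gl npm run test-render
```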

### Build Notifications

The terminal window can be very noisy with both the build and the test servers running in the same session, so the server uses platform notifications to signal when the build has finished. If this behaviour is annoying, it can be disabled by setting the following environment variable:

```
DISABLE_BUILD_NOTIFICATIONS=true
```
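
For example, to start the render-test watcher without notifications:

```
DISABLE_BUILD_NOTIFICATIONS=true npm run watch-render
```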

## Writing new tests

Note: Expected results are always generated with the JS implementation. This is merely for consistency and does not imply that, in the event of a rendering discrepancy, the JS implementation is always correct.

To add a new render test:

1. Create a new directory: `test/integration/render-tests/<property-name>/<new-test-name>`

2. Create a new `style.json` file within that directory, specifying the map to load. Feel free to copy & modify one of the existing `style.json` files from the `render-tests` subdirectories. In this file, you can add additional information to describe the test and expected outcomes using the `description` metadata field (see the sketch after this list).

3. Generate an `expected.png` image from the given style by running the new test with the `UPDATE` flag enabled:

   ```
   $ UPDATE=1 npm run test-render tests=<property-name>/<new-test-name>
   ```

   The test will appear to fail, but you'll now see a new `expected.png` in the test directory.

4. Manually inspect `expected.png` to verify it looks as expected, and optionally run the test again without the update flag (`npm run test-render tests=<property-name>/<new-test-name>`) to watch it pass (enjoy that dopamine kick!).

5. Commit the new `style.json` and `expected.png` 🚀
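
As an illustrative sketch, the `description` field sits alongside the other test metadata in `style.json` (the values here are hypothetical):

```json
{
  "version": 8,
  "metadata": {
    "test": {
      "description": "Checks that a literal circle-radius value renders circles at the expected size.",
      "width": 64,
      "height": 64
    }
  }
}
```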

## Tests on CircleCI

Every pushed commit triggers test runs on the CircleCI server. These catch regressions and prevent platform-specific bugs.

Render tests often fail due to minor antialiasing differences between platforms. In these cases, you can add an `"allowed"` property under `"test"` in the test's `style.json` to tell the test runner what degree of difference is acceptable: the fraction of pixels that may differ between `expected.png` and `actual.png`, ignoring some antialiasing, while still allowing the test to pass.
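
For example, a test that tolerates minor antialiasing differences might include (value illustrative):

```json
{
  "metadata": {
    "test": {
      "allowed": 0.001
    }
  }
}
```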

How much adjustment to `"allowed"` is acceptable depends on the test, but values >= 0.01 are usually much too high. Especially with larger test images, `"allowed"` values should generally be negligible, since a tolerance that is too high will fail to catch regressions, and significant rendering differences suggest a bug.

Larger `"allowed"` values are acceptable for testing debug features that will not be directly used by customers.

### Ignores

If a test fails with too large a difference to absorb by adjusting `"allowed"`, it can be added to the corresponding ignore file for the browser or operating system.

Ignores include tests under `"todo"` and `"skip"`. `"todo"` tests show up in test results but do not trigger a failing run. Most tests failing on only one platform should be marked as `"todo"`; this allows us to notice if the tests start passing.

Tests under `"skip"` will not run at all. Tests should be skipped if they trigger crashes or if they are flaky (to prevent falsely concluding that the test is a non-issue).

Ignored tests should link to an issue explaining the reason for ignoring the test.
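
The exact layout of the ignore files varies by platform; as a purely hypothetical sketch, entries pairing a test with its tracking issue might look like:

```json
{
  "todo": {
    "render-tests/circle-radius/antimeridian": "https://github.com/mapbox/mapbox-gl-js/issues/XXXX"
  },
  "skip": {
    "render-tests/some-flaky-test": "skip - flaky, see linked issue"
  }
}
```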

## Reading Vector Tile Fixtures

Install `vt2geojson`, a command-line utility which turns vector tiles into GeoJSON, and `harp`, a simple file server:

```
npm install -g vt2geojson harp
```

Start a static file server:

```
harp server .
```

Read the contents of an entire vector tile:

```
vt2geojson -z 14 -y 8803 -x 5374 http://localhost:9000/tiles/14-8803-5374.mvt
```

Read the contents of a particular layer in a vector tile:

```
vt2geojson --layer poi_label -z 14 -y 8803 -x 5374 http://localhost:9000/tiles/14-8803-5374.mvt
```