Segregate `unit test` job into a separate `integration test` job
## What does this MR do?

This MR:

- adds the `integration` build directive to integration test files, and the `!integration` build directive to unit test files (see the sketches after this list);
- adds a `check_test_directives` Makefile target allowing the developer and CI (through the `lint` dependency on it) to check that we didn't miss any build directive in a test file;
- segregates integration tests into separate jobs, so we can obtain a separate integration-test code coverage report. The job changes are as follows:
  - `prebuild` stage:
    - `test definitions` -> `unit test definitions` & `integration test definitions`;
  - `coverage` stage:
    - `unit test` -> `unit test` & `integration test`;
    - `unit test with race` -> `unit test with race` & `integration test with race`;
    - `windows 1809 tests` -> `windows 1809 unit tests` & `windows 1809 integration tests`;
    - `windows 1903 tests` -> `windows 1903 integration tests`;
    - `windows 1909 tests` -> `windows 1909 integration tests`;
    - `windows 2004 tests` -> `windows 2004 integration tests`;
    - `integration_k8s` now only runs integration tests;
  - `coverage` stage:
    - `test coverage report` -> `test coverage report` (unit + integration) & `integration test coverage report` (integration-only);
    - `check windows 1809 panic test failures` -> `check windows 1809 panic unit test failures` & `check windows 1809 panic integration test failures`;
    - `check windows 1809 test failures` -> `check windows 1809 unit test failures` & `check windows 1809 integration test failures`;
    - `check windows 1903 test failures` -> `check windows 1903 integration test failures`;
    - `check windows 1909 test failures` -> `check windows 1909 integration test failures`;
    - `check windows 2004 test failures` -> `check windows 2004 integration test failures`;
- adds documentation to `docs/development/README.md` regarding the new procedure;
- removes boilerplate calls to `SkipIntegrationTests` in `docker_command_integration_test.go` by checking for it once in `TestMain` (sketched below).
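
For illustration, the split relies on Go build constraints: integration test files carry the `integration` tag and unit test files carry its negation, so a plain `go test ./...` compiles only unit tests while `go test -tags=integration ./...` compiles only integration tests. A minimal sketch follows; the file names, package name, and test bodies are hypothetical, and the MR may use the newer `//go:build` syntax rather than the pre-Go-1.17 `// +build` form shown here.

```go
// foo_integration_test.go (hypothetical): built only when the "integration"
// tag is supplied, e.g. `go test -tags=integration ./...`.

// +build integration

package foo

import "testing"

func TestAgainstRealServices(t *testing.T) {
	// exercises external dependencies (Docker, network, etc.)
}
```

```go
// foo_test.go (hypothetical): excluded from integration runs via the negated
// tag, so plain `go test ./...` builds it and `-tags=integration` does not.

// +build !integration

package foo

import "testing"

func TestInIsolation(t *testing.T) {
	// pure unit test, no external dependencies
}
```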
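
The `TestMain` change can be sketched in the same spirit: a single package-level guard replaces a skip call repeated at the top of every test function. The guard below (an environment variable) is a hypothetical stand-in; the project's own `SkipIntegrationTests` helper performs its own checks, which are not reproduced here.

```go
// +build integration

package foo

import (
	"fmt"
	"os"
	"testing"
)

// TestMain runs once per test binary, so the skip decision is made here
// instead of in each individual test.
func TestMain(m *testing.M) {
	if os.Getenv("SKIP_INTEGRATION_TESTS") != "" { // hypothetical guard
		fmt.Println("skipping integration tests for this package")
		os.Exit(0)
	}
	os.Exit(m.Run())
}
```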
## Why was this MR needed?

Currently, it's hard to perform a simple task such as generating a code coverage report for just the integration tests. Such a report could help identify areas that are not adequately covered by integration tests during MR reviews of sensitive areas.

Also, since we split our `unit test` jobs indiscriminately, some jobs end up with a disproportionate number of integration tests, causing our parallel test suite to run slower than needed.

The typical pipeline currently completes in about 1h30m. On this branch, it usually takes ~52m.
## Tasks

- add an `integration` tag (following SoundCloud's Go best practices here);
- add the `!integration` build directive to unit test files. This will generate a lot of diffs;
- split out the `unit test` job into an `integration test` job that will only run integration tests. The `unit test` job will ignore integration tests. Downstream jobs should be cloned as appropriate;
- structure test artifacts as `{.cover,.testoutput}/{unit,integration}/**` instead of `{.cover,.testoutput}/**` so we can mix and match when we want to generate a report or check for errors;
- adapt the `test coverage report` job to generate a Cobertura report from the merged unit/integration test results;
- adapt the documentation to mention the new way of splitting tests and how to run unit tests and/or integration tests;
- Optional: add some sort of validation/linting so that we don't push test files that are not correctly classified.
## What's the best way to test this MR?

We should compare the before/after state of the jUnit report (the main branch against this branch). We can use the following script to fetch and massage the report into a sorted text list of executed tests that we can paste into a file comparison tool:
```shell
# Download and sort the jUnit test report; use it to compare the old and new reports for missing tests
curl https://gitlab.com/api/v4/projects/250833/pipelines/${PIPELINE_ID}/test_report | jq -r '.test_suites[].test_cases[].name' | sort | pbcopy
```
- In CI:
  - Check that inline code coverage in this MR hasn't dropped (the results of unit tests and integration tests should be merged);
  - Check that you can access an integration-test-only code coverage HTML report (e.g. https://gitlab.com/gitlab-org/gitlab-runner/-/jobs/1064403603/artifacts/file/out/coverage/coverprofile.regular.html);
  - Check that the integration test coverage seems to make sense;
  - Check that jobs have the correct artifacts (e.g. `unit test definitions` contains a `testsdefinitions.txt` listing all unit tests, `integration test definitions` contains a `testsdefinitions.txt` listing all integration tests, etc.).
- Local environment:
  - Check that `make test` works as expected (runs integration tests);
  - Check that `make unit-test` works as expected (runs unit tests);
  - Check that `make check_test_directives` works as expected (see below).

To check that all unit/integration test files have the correct build directive, a new `check_test_directives` target was added to the Makefile (on which `lint` depends), so that in the future we can automatically catch files that weren't adapted with the correct build directive:

```shell
make check_test_directives
```

To test it, you can try changing or removing the build directives of a test file and running the command.
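
The Makefile implementation itself is not reproduced here. As a rough illustration only, the kind of scan such a target performs could look like the following standalone Go sketch (hypothetical, not the project's actual checker): it walks `*_test.go` files and flags any that declare neither the `integration` nor the `!integration` constraint.

```go
// Hypothetical sketch of a build-directive check for test files; the real
// check_test_directives target lives in the Makefile and may work differently.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	missing := 0
	_ = filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(path, "_test.go") {
			return err
		}
		data, readErr := os.ReadFile(path)
		if readErr != nil {
			return readErr
		}
		content := string(data)
		// A correctly classified test file carries one of the two directives.
		if !strings.Contains(content, "+build integration") &&
			!strings.Contains(content, "+build !integration") {
			fmt.Println("missing build directive:", path)
			missing++
		}
		return nil
	})
	if missing > 0 {
		os.Exit(1)
	}
}
```

Because `lint` depends on the target, a non-zero exit from this kind of check fails CI whenever a test file is misclassified.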
## What are the relevant issue numbers?
Closes #27608