Run tests for new/modified files first: match file name
Problem to solve
Programmers want fast feedback on how their changes will work. Today the unit-test feedback loop is too slow unless they manually re-arrange the tests to run early in the pipeline, or even in a separate test job before the rest of the unit tests. Because that isn't efficient, they usually just let the tests run normally, which can take a while; by the time failures surface they have lost the context of the change, making fixes slower than ideal.
Following the POC (#198550 (closed)), which established that this is a "good thing" for developers (they spend less time waiting to find test failures), we want to build this feature into GitLab.
Intended users
Further details
We know that TDD improves quality, and that continuous testing is a predictor of Continuous Delivery and, in turn, of organizational performance (see DORA State of DevOps 2019, p. 60).
To better enable delivery teams to adopt TDD, we want to shorten the feedback cycle on how those new tests perform when a change is checked into the repo. By running only the tests for newly added or changed code, we can let the team start their review more quickly, even while the rest of the pipeline is still running, and with confidence that the code is doing what it should.
Proposal
Doing this as a one-off script seems to have worked out. This issue will codify that by building it into GitLab: when a file is created or changed for Go, Ruby, Java, etc. (@erushton, it would be helpful to nail down this list with you), the corresponding test file that matches by name/directory structure will be found and pulled into a dynamic job that runs as a pre-test or verification stage.
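As a rough sketch of what "matches by name/directory structure" could mean, here is one possible set of per-language rules. The extensions and path conventions below are illustrative assumptions only; the actual language list and conventions are still to be nailed down, per the proposal above.

```python
import os

# Illustrative per-language conventions for locating the test file that
# matches a changed source file. Not a committed rule set.
CONVENTIONS = {
    ".go": lambda stem, d: os.path.join(d, stem + "_test.go"),
    ".rb": lambda stem, d: os.path.join("spec", d, stem + "_spec.rb"),
    ".java": lambda stem, d: os.path.join(d.replace("main", "test", 1), stem + "Test.java"),
}

def matching_test_file(changed_path):
    """Return the conventional test-file path for a changed source file,
    or None if the extension has no known convention."""
    d, name = os.path.split(changed_path)
    stem, ext = os.path.splitext(name)
    rule = CONVENTIONS.get(ext)
    return rule(stem, d) if rule else None
```

The set of paths returned for a merge request's changed files would then be fed into the dynamic pre-test/verification job.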
A follow-on issue will allow configuration of a route map of sorts to establish how code files map to test files. For now we will follow the convention that names must "match" in order to use the feature, which is enough to enable an MVC.
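One possible shape for such a route map, sketched in Python for illustration. The pattern/template format here is purely hypothetical; deciding the real configuration format is the job of the follow-on issue.

```python
import fnmatch

# Hypothetical route map: glob patterns for source files mapped to
# templates for their test files. Nothing here is a committed format.
ROUTE_MAP = [
    ("app/services/*.rb", "spec/services/{stem}_spec.rb"),
    ("lib/*.go", "lib/{stem}_test.go"),
]

def route_test_file(changed_path):
    """Resolve a changed file to its test file via the route map,
    returning None when no pattern applies."""
    stem = changed_path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
    for pattern, template in ROUTE_MAP:
        if fnmatch.fnmatch(changed_path, pattern):
            return template.format(stem=stem)
    return None
```

A route map like this would let teams whose layouts don't follow the default naming convention still opt into the feature.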
Permissions and Security
Documentation
- Documentation of how to set up the directory and file naming conventions required to use this feature.
Availability & Testing
What does success look like, and how can we measure that?
A successful outcome will be a 50% increase in pipelines using the verification stage among Starter-tier (or above) users.
What is the type of buyer?
The buyer for this is a team lead or director who wants their team to write tests for the code they check into source control. This should be built for the Starter tier.