Resolve "API support for Lead time for MRs to be merged"
## What does this MR do?
- Related to #291746 (closed)
- ~~Builds on changes from !51938 (merged)~~ Copies some changes from !51938 (merged) (hopefully the merge conflicts won't be too bad)
- Adds a REST API for DORA4 Lead Time at the Project scope. The API should return the average time for MRs to be merged.
## Notes
My thoughts on the last two items in the Todo list are below. Right now I'm calculating the lead time in Ruby like so:

```ruby
# Average lead time (in minutes) across a group of merge request metrics.
def average_lead_time(grouping)
  size = grouping.count
  return 0 if size == 0

  total = grouping.inject(0) do |sum, mr|
    sum + (mr.merged_at - mr.merge_request.created_at).to_i / 1.minute
  end

  total / size
end
```
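To make the arithmetic above concrete, here is a minimal, self-contained check using `Struct` stand-ins for the ActiveRecord objects (`MetricsStub` and `MergeRequestStub` are hypothetical names, and plain `60` stands in for ActiveSupport's `1.minute`):

```ruby
require 'time'

# Hypothetical stand-ins for MergeRequest and MergeRequest::Metrics.
MergeRequestStub = Struct.new(:created_at)
MetricsStub = Struct.new(:merged_at, :merge_request)

# Same arithmetic as average_lead_time above, with 60 in place of 1.minute.
def average_lead_time(grouping)
  size = grouping.count
  return 0 if size == 0

  total = grouping.inject(0) do |sum, mr|
    sum + (mr.merged_at - mr.merge_request.created_at).to_i / 60
  end

  total / size
end

mrs = [
  # Merged after 1 hour (60 minutes) and 2 hours (120 minutes) respectively.
  MetricsStub.new(Time.parse('2021-01-01 01:00:00'),
                  MergeRequestStub.new(Time.parse('2021-01-01 00:00:00'))),
  MetricsStub.new(Time.parse('2021-01-01 02:00:00'),
                  MergeRequestStub.new(Time.parse('2021-01-01 00:00:00')))
]

average_lead_time(mrs) # => 90
```

Note the integer division: the result is truncated to whole minutes, which may or may not be what the API should return.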
We need to find a way to do this in-database. There is an existing `total_time_to_merge` in the `MergeRequest::Metrics` model, but it doesn't quite work. Something like that is what we want to do.
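For illustration, pushing the average into the database could look roughly like the following. This is a hedged sketch only: the join to `merge_requests` and the column names are assumed from the Ruby code above, not from the actual implementation in this MR.

```sql
-- Hypothetical in-database version of average_lead_time, in minutes:
SELECT
  AVG(EXTRACT(EPOCH FROM (merge_request_metrics.merged_at - merge_requests.created_at)) / 60)
FROM
  merge_request_metrics
  JOIN merge_requests ON merge_requests.id = merge_request_metrics.merge_request_id
WHERE
  merge_request_metrics.target_project_id = :project_id
  AND merge_request_metrics.merged_at >= :start_date
  AND merge_request_metrics.merged_at < :end_date;
```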
Also, if we want the pagination to be performant, we need to push it down into the database as well. I believe that Value Stream Analytics does this. @ahegyi might be a good person to contact about this!
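One common way to do database-level pagination is keyset pagination, which replaces `OFFSET` with a seek on the ordering columns so each page is an index range scan. This is only a sketch of the general technique, not what Value Stream Analytics actually does, and it assumes an index covering the `(merged_at, id)` ordering:

```sql
-- Hypothetical keyset-pagination sketch: fetch the page after the last
-- (merged_at, id) pair the client saw, instead of using OFFSET.
SELECT *
FROM merge_request_metrics
WHERE target_project_id = :project_id
  AND merged_at >= :start_date
  AND merged_at < :end_date
  AND (merged_at, id) > (:last_merged_at, :last_id)
ORDER BY merged_at, id
LIMIT 20;
```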
## Todo
- ~~Tests for `merged_before` and `merged_after`.~~ Use the existing `MergeRequest.merged_between`.
- ~~Leverage existing `MergeRequestsFinder`.~~ Don't even need to use a finder for this, honestly.
- Add screenshots for the new APIs as per @ogolowinski's request
- ~~Rebase on !51938 (merged) once it is merged. Which requires that we first get !52288 (closed) merged too.~~ Don't wait for rebase.
- Add spec tests for `ee/app/services/analytics/merge_requests/lead_time/aggregate_service.rb`
- Completely refactor how the aggregate service works so that we can paginate in such a way that we make multiple SQL requests for each page, as per: !52243 (comment 500874905)
- Find some way to leverage `total_time_to_merge` or something like it as per !52243 (comment 499936395)
## Screenshot

## Database

Raw SQL:
```sql
SELECT
  "merge_request_metrics".*
FROM
  "merge_request_metrics"
WHERE
  "merge_request_metrics"."target_project_id" = :project_id
  AND "merge_request_metrics"."merged_at" >= :start_date
  AND "merge_request_metrics"."merged_at" < :end_date
ORDER BY
  merged_at
```
Query Plan:

```
Sort  (cost=18.09..18.12 rows=13 width=140) (actual time=5.483..5.485 rows=0 loops=1)
  Sort Key: merge_request_metrics.merged_at
  Sort Method: quicksort  Memory: 25kB
  Buffers: shared hit=3 read=4
  I/O Timings: read=5.344
  ->  Index Scan using index_mr_metrics_on_target_project_id_merged_at_nulls_last on public.merge_request_metrics  (cost=0.56..17.85 rows=13 width=140) (actual time=5.384..5.384 rows=0 loops=1)
        Index Cond: ((merge_request_metrics.target_project_id = 1) AND (merge_request_metrics.merged_at >= '2020-02-01 00:00:00'::timestamp without time zone) AND (merge_request_metrics.merged_at < '2020-04-01 00:00:00'::timestamp without time zone))
        Buffers: shared read=4
        I/O Timings: read=5.344
```
Summary:
Time: 5.758 ms
- planning: 0.227 ms
- execution: 5.531 ms
- I/O read: 5.344 ms
- I/O write: N/A
Shared buffers:
- hits: 3 (~24.00 KiB) from the buffer pool
- reads: 4 (~32.00 KiB) from the OS file cache, including disk I/O
- dirtied: 0
- writes: 0
Recommendations:
✅ Looks good
Permalink: https://postgres.ai/console/gitlab/gitlab-production-tunnel/sessions/1895/commands/6428
## Does this MR meet the acceptance criteria?
### Conformity
- [ ] Changelog entry
- [ ] Documentation (if required)
- [ ] Code review guidelines
- [ ] Merge request performance guidelines
- [ ] Style guides
- [ ] Database guides
- [ ] Separation of EE specific content
### Availability and Testing
- [ ] Review and add/update tests for this feature/bug. Consider all test levels. See the Test Planning Process.