Optimize services_usage counters using batch counting
The counters in #210007 (closed) time out on gitlab.com. This MR removes the `.group`
SQL query and instead counts each service type separately using batch counting.
- Spec coverage from the existing tests is good
After
The batch counter can count each service type without hitting the statement timeout:

```
[ gprd ] production> start=Time.now; [Gitlab::UsageData.count(::Service.active.where(template: false, type: "GithubService")), Time.now - start]
=> [70060, 15.982688295]
[ gprd ] production> start=Time.now; [Gitlab::UsageData.count(::Service.active.where(template: false, type: "AlertsService")), Time.now - start]
=> [29, 9.558777378]
```
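The idea behind batch counting can be sketched as follows. This is a minimal, self-contained illustration (the `Row` struct, in-memory rows, and `batch_count` helper are hypothetical, not GitLab code): instead of one `COUNT(*)` over the whole table, the counter walks the primary-key range in fixed-size slices, so each query touches a bounded number of rows and stays well under the statement timeout.

```ruby
# Hypothetical in-memory stand-in for a services table.
Row = Struct.new(:id, :type, :active)

# Count matching rows in primary-key ranges, mimicking queries like:
#   SELECT COUNT(*) FROM services WHERE id >= lower AND id < upper AND ...
def batch_count(rows, batch_size: 3)
  return 0 if rows.empty?

  lower = rows.map(&:id).min
  max_id = rows.map(&:id).max
  total = 0

  while lower <= max_id
    upper = lower + batch_size
    # Each iteration is one short, cheap query over a bounded id slice.
    total += rows.count { |r| r.id >= lower && r.id < upper }
    lower = upper
  end

  total
end

rows = (1..10).map { |i| Row.new(i, "GithubService", i.odd?) }
active = rows.select(&:active)
puts batch_count(active) # => 5
```

Each slice finishes quickly, so the total wall time can exceed a single statement's timeout while no individual query does.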
Before
The original query takes over a minute, far beyond the 15-second statement
timeout:

```sql
SELECT COUNT(*) AS count_all, "services"."type" AS services_type FROM "services" WHERE "services"."active" = TRUE AND "services"."template" = FALSE AND "services"."type" != 'JiraService' GROUP BY "services"."type"
# Time: 73990.437 ms (01:13.990)
```
Edited by Alper Akgun