Draft: Add benchmarking for autocomplete endpoint
## What does this MR do and why?

### Why
Since we are introducing advanced search for users in !102724 (merged), we would like to leverage that to improve the performance of other user-based endpoints:
- `autocomplete/users.json`, which powers the Assignee and Reviewers fields on MRs and Issues
- `autocomplete_sources/members`, which lists users on notes for actions such as `/assign`, `/cc`, etc.
This MR achieves two things: (1) identify what causes slowness on the endpoint, and (2) establish a benchmark of the performance before making any changes so that we can measure improvements.
### Existing metrics
We already track the duration of both endpoints via `json.duration` and `json.db_duration`, which will be useful in determining impact.
### What
Adds benchmarking to the `ParticipantsService` to measure the full duration of getting all participants and the duration of each sub-component. The `ParticipantsService` is called from the `autocomplete_sources/members` endpoint.
Puts the benchmarking behind a feature flag so that we can enable it for a percentage of the time instead of on every request.
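For reviewers, a minimal sketch of the intended shape of the instrumentation. This is illustrative only: `FeatureStub`, `ParticipantsBenchmark`, the flag name, and the log keys are hypothetical stand-ins, not the actual service code (in GitLab the gate would be `Feature.enabled?` with a percentage-of-time actor).

```ruby
require 'benchmark'

# Hypothetical stand-in for GitLab's Feature.enabled? configured with a
# percentage-of-time gate: returns true for only a fraction of calls.
module FeatureStub
  def self.enabled?(_name)
    rand < 0.1 # benchmark roughly 10% of requests
  end
end

# Illustrative wrapper: measures a named sub-component of participant
# gathering only when the (hypothetical) flag is enabled, and always
# returns the block's result unchanged.
class ParticipantsBenchmark
  def initialize(logger: method(:puts))
    @logger = logger
  end

  def measure(label)
    return yield unless FeatureStub.enabled?(:benchmark_participants)

    result = nil
    elapsed = Benchmark.realtime { result = yield }
    @logger.call("participants.#{label}_duration_s=#{elapsed.round(4)}")
    result
  end
end

bench = ParticipantsBenchmark.new
members = bench.measure(:project_members) { %w[alice bob] }
```

Keeping the measurement in a wrapper like this means the sampled requests pay the `Benchmark.realtime` cost while unsampled requests take the early-return path, which is the point of gating by percentage rather than benchmarking every call.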
Related to #366324 (closed)