Cache AI requests and responses
## What does this MR do and why?
Related to https://gitlab.com/gitlab-org/gitlab/-/issues/410521
Cache AI requests and responses:
- A Redis stream is used to cache AI requests/responses.
- Because stream entries can't be updated in place, requests and responses are cached separately.
- A response can be found by its request's ID (the ID is the same for both the request and the response).
- When caching is called for responses, `response_body` is also stored. It's not clear at this point what data we will need to store for requests vs. responses (we can extend this later), so for now the presence of `response_body` is the only difference between `requests` and `responses`.
- This cache is not used yet (I plan to wire it up in a separate MR, as that would make this change too big). See the sketch after this list.
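
A minimal sketch of the idea in Ruby using the redis-rb gem. The class name, stream key, and field names below are illustrative assumptions, not the MR's actual implementation:

```ruby
require 'redis'

# Sketch only: caches AI requests and responses as separate entries in a
# single Redis stream, since stream entries can't be updated in place.
class AiRequestCache
  STREAM_KEY = 'ai:requests_responses' # hypothetical key
  MAX_LEN = 1_000                      # keep the stream bounded

  def initialize(redis = Redis.new)
    @redis = redis
  end

  # Append a request entry keyed by its request_id.
  def cache_request(request_id, payload)
    @redis.xadd(STREAM_KEY,
                { request_id: request_id, payload: payload },
                maxlen: MAX_LEN, approximate: true)
  end

  # Append a response entry carrying the same request_id plus response_body;
  # the presence of response_body is what distinguishes it from a request.
  def cache_response(request_id, response_body)
    @redis.xadd(STREAM_KEY,
                { request_id: request_id, response_body: response_body },
                maxlen: MAX_LEN, approximate: true)
  end

  # Find a cached response by scanning the stream for an entry with the
  # matching request_id that also has a response_body.
  def find_response(request_id)
    @redis.xrange(STREAM_KEY).each do |_entry_id, fields|
      next unless fields['request_id'] == request_id

      return fields['response_body'] if fields.key?('response_body')
    end

    nil
  end
end
```

Here `cache_request` and `cache_response` append separate entries, and `find_response(request_id)` returns the cached `response_body` if one has been written, mirroring how a response is looked up by its request's ID.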
## Screenshots or screen recordings
Screenshots are required for UI changes, and strongly recommended for all other merge requests.
## How to set up and validate locally
Numbered steps to set up and validate the change are strongly suggested.
## MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- I have evaluated the MR acceptance checklist for this MR.