Fixed handling of the messages from cache
## What does this MR do and why?
Fixes handling of messages restored from the cache:
- If a message has an error, we show the generic error message
- If the assistant's message has its content as one plain string (`JSON.parse` fails on `content`), we send the whole message to `ADD_TANUKI_MESSAGE`
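The two rules above can be sketched roughly as follows. This is a hypothetical illustration, not the MR's actual code: the mutation name `ADD_TANUKI_MESSAGE` comes from the MR, but the function name, the `commit` callback shape, and the `GENERIC_ERROR_MESSAGE` constant are assumptions made for the example.

```javascript
// Hypothetical placeholder for the generic error text shown to the user.
const GENERIC_ERROR_MESSAGE = 'Sorry, something went wrong.';

// Sketch of restoring one cached message into the store.
// `commit` stands in for a Vuex-style commit function.
function restoreCachedMessage(commit, message) {
  // Rule 1: a message carrying errors gets the generic error message
  // instead of its original content.
  if (message.errors && message.errors.length) {
    commit('ADD_TANUKI_MESSAGE', { ...message, content: GENERIC_ERROR_MESSAGE });
    return;
  }

  try {
    // Assistant content may be JSON-encoded; parse it when possible.
    const parsed = JSON.parse(message.content);
    commit('ADD_TANUKI_MESSAGE', { ...message, ...parsed });
  } catch {
    // Rule 2: content is one plain string, so JSON.parse throws and
    // we pass the whole message through unchanged.
    commit('ADD_TANUKI_MESSAGE', message);
  }
}
```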
## Screenshots or screen recordings
| Before | After |
| --- | --- |
| chat-cache-errors | chat-cache-errors-fixed |
## How to set up and validate locally
- Follow the instructions to enable the AI features in your local GDK
- Follow the instructions on setting up the GitLab chat locally
- Enable the `:anthropic_experimentation` feature flag (`Feature.enable(:anthropic_experimentation)` in your Rails console)
- Enable the `:super_sidebar_nav` feature flag and the new super sidebar in your settings via the web interface
- In your Rails console, repopulate the messages cache with:

  ```ruby
  user = User.first
  user_cache = Gitlab::Llm::Cache.new(user)
  user_cache.add({ request_id: '123', role: Gitlab::Llm::Cache::ROLE_USER, content: 'Foo Bar' })
  user_cache.add({ request_id: '123', role: Gitlab::Llm::Cache::ROLE_ASSISTANT, content: 'Assistant content', errors: ['this is Error'] })
  user_cache.add({ request_id: '123', role: Gitlab::Llm::Cache::ROLE_USER, content: 'Foo Bar 2' })
  user_cache.add({ request_id: '123', role: Gitlab::Llm::Cache::ROLE_ASSISTANT, content: '', errors: ['this is Error 2'] })
  ```
- Open the GitLab chat in Help -> Ask GitLab Chat
You should see the result from the "After" recording: both pairs of messages appear, and the assistant's responses show the generic error message.
## MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- I have evaluated the MR acceptance checklist for this MR.
Edited by Denys Mishunov