Execute Duo Chat explain code tool via agents
What does this MR do and why?
This MR executes the Duo Chat Explain Code tool via AI Gateway Agents, as a step toward moving prompt templates from Rails to AI Gateway.
It calls the endpoint defined in this MR: feat(agents): add prompts for explain code tool (gitlab-org/modelops/applied-ml/code-suggestions/ai-assist!1132 - merged)
Related MR: Adding strategy for migrating Duo Chat tools to... (gitlab-com/content-sites/handbook!7380 - merged)
Related issue: Migrate Duo Chat Tools: ExplainCode (#475050 - closed)
Motivation
The idea is to move prompt generation logic from Rails to AI Gateway: Adding strategy for migrating Duo Chat tools to... (gitlab-com/content-sites/handbook!7380 - merged).
Currently
The Explain Code tool is executed (via `Gitlab::Llm::Chain::Tools::ExplainCode::Executor`) with the following arguments:

```ruby
file_name: 'test.py',
selected_text: 'selected text',
content_above_cursor: 'code above',
content_below_cursor: 'code below'
```
We pass those arguments to a `::Gitlab::Llm::Chain::Tools::ExplainCode::Prompts::...` class to build a prompt and then send this prompt to AI Gateway (the `/v1/chat/agent` endpoint). The prompt is then forwarded to the LLM, and the response is sent back to Rails.
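As a rough sketch (simplified names and template text, not the actual GitLab classes), the current Rails-side flow renders the prompt before calling AI Gateway:

```ruby
# Simplified sketch of the current flow: Rails renders the prompt from a
# template and only then calls the generic AI Gateway chat endpoint.
# The template text and method name here are illustrative assumptions.
EXPLAIN_CODE_TEMPLATE = <<~TEMPLATE
  Explain the selected code from %{file_name}:

  %{content_above_cursor}
  <selected>%{selected_text}</selected>
  %{content_below_cursor}
TEMPLATE

def build_explain_code_prompt(options)
  # In GitLab this is done by a Prompts class for the configured model
  format(EXPLAIN_CODE_TEMPLATE, options)
end

prompt = build_explain_code_prompt(
  file_name: 'test.py',
  selected_text: 'selected text',
  content_above_cursor: 'code above',
  content_below_cursor: 'code below'
)
# `prompt` is the fully rendered string that Rails POSTs to /v1/chat/agent.
```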
In this MR
Instead of building the prompt in Rails, we pass the options directly to AI Gateway (the `/v1/chat/agents/explain_code` endpoint).
The prompt is generated in AI Gateway using the templates defined in feat(agents): add prompts for explain code tool (gitlab-org/modelops/applied-ml/code-suggestions/ai-assist!1132 - merged).
The prompt is then sent to the LLM, and the response is sent back to Rails.
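With this change, the request body carries only the raw options. A hypothetical payload (the field names are assumptions for illustration; the real schema is defined by the AI Gateway MR referenced above) might look like:

```ruby
require 'json'

# Hypothetical request body for POST /v1/chat/agents/explain_code.
# Field names are assumed for illustration; the actual schema is defined
# in the AI Gateway MR. No prompt string is rendered on the Rails side.
payload = {
  inputs: {
    file_name: 'test.py',
    selected_text: 'selected text',
    content_above_cursor: 'code above',
    content_below_cursor: 'code below'
  }
}

body = JSON.generate(payload)
# AI Gateway renders the prompt from its own template using these inputs.
```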
Test
- Check out `id-explain-code-via-agents` on AI Gateway
- Enable the `prompt_migration_explain_code` feature flag
- Run `/explain some code` in Duo Chat

In AI Gateway, `POST v1/agents/chat/explain_code` is called with all the params necessary to build and execute the prompt.
Migrating other tools
- Define a `prompt_migration_#{unit_primitive}` feature flag
- Include `UseAiGatewayAgentPrompt` in the executor of the tool
  - If a `unit_primitive` method is not defined in the executor, it will be `nil` whether the feature flag is disabled or enabled
  - If it's defined, it will always contain the defined value
  - This is done to cover the following logic:
    - If `use_ai_gateway_agent_prompt` is enabled, the agent endpoint is called
    - If it's disabled, the unit primitive is checked: if the unit primitive is present, `#{BASE_ENDPOINT}/#{unit_primitive}` is called; otherwise, `ENDPOINT` is called
- Add `it_behaves_like 'uses ai gateway agent prompt'` to the executor's spec
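The endpoint routing described above can be sketched as a self-contained example (the class name, constant values, and the boolean standing in for the feature-flag check are illustrative, not the exact GitLab code):

```ruby
# Sketch of the endpoint selection performed by the executor when
# UseAiGatewayAgentPrompt is included. Names and paths are illustrative.
class ExecutorSketch
  BASE_ENDPOINT = '/v1/chat'
  ENDPOINT = '/v1/chat/agent'
  AGENT_ENDPOINT = '/v1/chat/agents'

  def initialize(unit_primitive: nil, flag_enabled: false)
    @unit_primitive = unit_primitive
    # Stands in for Feature.enabled?(:"prompt_migration_#{unit_primitive}")
    @flag_enabled = flag_enabled
  end

  def use_ai_gateway_agent_prompt?
    @flag_enabled && !@unit_primitive.nil?
  end

  def endpoint
    if use_ai_gateway_agent_prompt?
      # Prompt is built in AI Gateway by the agent
      "#{AGENT_ENDPOINT}/#{@unit_primitive}"
    elsif @unit_primitive
      # Prompt is built in Rails; unit-primitive-specific endpoint
      "#{BASE_ENDPOINT}/#{@unit_primitive}"
    else
      # Prompt is built in Rails; generic chat endpoint
      ENDPOINT
    end
  end
end
```

For example, `ExecutorSketch.new(unit_primitive: 'explain_code', flag_enabled: true).endpoint` resolves to the agent endpoint, while the same executor with the flag disabled falls back to the unit-primitive endpoint.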