Refactor to remove options from base_prompt
## What does this MR do and why?
Related to #417534
This MR refactors the `base_prompt` methods of `AiDependent` classes (the ones that include the concern) so that they return only the prompt text, and makes each prompt provider format and insert the inputs intended for its request class.
## Why?
- Returning a text value is less fault-prone than returning a hash (see !127316 (comment 1484801180)).
- As it currently stands, the options intended for the underlying AI provider are better handled and inserted at the prompt class level. Take the zero-shot executor, for example: the executor has no business knowing which set of options should be used for each provider.
```ruby
# https://gitlab.com/gitlab-org/gitlab/-/blob/master/ee/lib/gitlab/llm/chain/agents/zero_shot/executor.rb#L78
# This method should not be memoized because the input variables change over time
def base_prompt
  {
    prompt: Utils::Prompt.no_role_text(PROMPT_TEMPLATE, options),
    agent_scratchpad: options[:agent_scratchpad],
    options: {} # Options intended for the underlying provider, such as Anthropic. For example, `temperature: 0.1` could be included in this hash.
  }
end
```
With the changes in this MR, `base_prompt` returns a text value. The prompt class (`Anthropic`) then takes that text and injects the appropriate options using the class method from the request class:
```ruby
def self.prompt(options)
  text = <<~PROMPT
    #{ROLE_NAMES[Llm::Cache::ROLE_USER]}: #{base_prompt(options)}
  PROMPT

  history = truncated_conversation(options[:conversation], Requests::Anthropic::PROMPT_SIZE - text.size)
  text = [history, text].join if history.present?

  Requests::Anthropic.prompt(text)
end
```
## MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- I have evaluated the MR acceptance checklist for this MR.