Conditionally inject blob viewed by user into zeroshot prompt
## What does this MR do and why?
Users should be able to ask about the code they are currently viewing. To fulfill such questions, we need to identify that code and inject it into the zeroshot executor's prompt so that the LLM can answer directly.
- Identify the code being viewed: parse the currently viewed project and blob from the `Referer` header.
- Inject the code along with a prompt telling the LLM to explain the code directly.
The change is gated behind a new feature flag, `:explain_current_blob`, which is meant to be short-lived.
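As a rough illustration of the conditional injection (a sketch only, not the actual GitLab implementation; the method name, `extra_resource` shape, and prompt wording are all hypothetical):

```ruby
# Hypothetical sketch: conditionally prepend the currently viewed blob to a
# zeroshot prompt. Method name and prompt wording are illustrative only.
def build_zero_shot_prompt(question, extra_resource: {})
  prompt = +""

  if (blob = extra_resource[:blob])
    # Inline the viewed file so the LLM can answer "explain this code" directly.
    prompt << "The user is currently viewing this code (#{blob[:path]}):\n"
    prompt << "```\n#{blob[:data]}\n```\n"
    prompt << "When asked about \"this code\", explain the code above directly.\n\n"
  end

  prompt << "Question: #{question}"
end
```

With no blob present, the prompt falls through unchanged, which matches the behavior when the feature flag is disabled.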
## How does the injection work?
- The `Referer` header included in a GraphQL request is passed to `CompletionWorker` as part of `options`.
- `CompletionWorker` calls `::Llm::ExtraResourceFinder`, which extracts blob-related references and then finds and authorizes the blob and its project.
  - `::Llm::ExtraResourceFinder` only handles blobs for now. As we add support for more resource types, appropriate abstractions will need to be created.
- If a readable blob is found, it is made available in `context.extra_resource`, and the zeroshot executor and other chains can make use of the blob's data.
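The `Referer`-based lookup in the steps above can be sketched roughly as follows. The URL pattern and method name are assumptions for illustration; the real `::Llm::ExtraResourceFinder` additionally resolves and authorizes the blob and project for the requesting user:

```ruby
require "uri"

# Hypothetical sketch of extracting a blob reference from a Referer header.
# The pattern matches GitLab-style /<project>/-/blob/<ref>/<path> URLs; the
# real finder also performs authorization, which is omitted here.
BLOB_PATH_PATTERN = %r{\A/(?<project>.+)/-/blob/(?<ref>[^/]+)/(?<path>.+)\z}

def extract_blob_reference(referer)
  return nil if referer.to_s.empty?

  path = URI.parse(referer).path
  match = BLOB_PATH_PATTERN.match(path)
  return nil unless match

  { project: match[:project], ref: match[:ref], path: match[:path] }
end
```

Non-blob URLs (for example an issue page) simply return `nil`, so no extra resource is injected for those requests.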
## Screenshots or screen recordings
| Before | After | After 2 | After 3 |
|---|---|---|---|
## How to set up and validate locally
Enable the feature flag to test the functionality:

```ruby
Feature.enable(:explain_current_blob)
```
- With the flag disabled, ask the LLM to explain the code and check that it hallucinates and makes up code, as shown in the before screenshot (the behavior should be the same as on the `main` branch).
- Enable the feature flag and test the prompt again.
- Check that the code inlined in the prompt is explained.
## MR acceptance checklist
This checklist encourages us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.
- I have evaluated the MR acceptance checklist for this MR.
Related to #419656 (closed)