chore(telemetry): store internal event context in middleware
Note: This is a relatively high priority item for #491 (comment 2006453847).
What does this merge request do and why?
This MR lays the groundwork for feat(tracking): add internal events framework t... (!902 - merged) and Track a product usage event with the new standa... (#491 - closed) by storing the internal event context in middleware.
These stored contexts are later fetched by InternalEventTracker to send a complete event to Snowplow (see this MR for the overall implementation). The current event context can be fetched in the following way:
from ai_gateway.internal_events import current_event_context, EventContext
context: EventContext = current_event_context.get()
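As a rough sketch of the mechanism this MR relies on (not the actual `ai_gateway` definitions): the context can be held in a `contextvars.ContextVar`, which is request-scoped under asyncio, so each concurrent request sees only the context its middleware stored. The field names below mirror the validation output later in this description; the use of a plain dataclass is an assumption for illustration.

```python
from contextvars import ContextVar
from dataclasses import dataclass
from typing import Optional


@dataclass
class EventContext:
    # Hypothetical fields, mirroring the attributes shown in the
    # validation output of this MR; the real class may differ.
    environment: Optional[str] = None
    source: Optional[str] = None
    realm: Optional[str] = None
    instance_id: Optional[str] = None
    host_name: Optional[str] = None
    instance_version: Optional[str] = None
    global_user_id: Optional[str] = None


# A ContextVar is isolated per asyncio task, so middleware can set it
# at the start of a request and any code handling that request can read it.
current_event_context: ContextVar[EventContext] = ContextVar(
    "current_event_context", default=EventContext()
)
```

With this shape, middleware calls `current_event_context.set(...)` once per request, and downstream code (e.g. the tracker) calls `.get()` with no arguments.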
How to set up and validate locally
Test request:
curl -X 'POST' \
'http://0.0.0.0:5052/v1/chat/agent' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-H 'X-Gitlab-Realm: saas' \
-H 'X-Gitlab-Instance-Id: 123' \
-H 'X-Gitlab-Host-Name: example.gitlab.com' \
-H 'X-Gitlab-Version: 17.0' \
-H 'X-Gitlab-Global-User-Id: 123abc' \
-d '{
"prompt_components": [
{
"type": "string",
"metadata": {
"source": "string",
"version": "string"
},
"payload": {
"content": "\n\nHuman: Hi, How are you?\n\nAssistant:",
"provider": "anthropic",
"model": "claude-2.1",
"params": {
"stop_sequences": [
"\n\nHuman",
"Observation:"
],
"temperature": 0.2,
"max_tokens_to_sample": 2048
},
"model_endpoint": "string",
"model_api_key": "string"
}
}
],
"stream": false
}'
Confirm that the context attributes are set properly:
context: environment='development' source='curl/7.81.0' realm='saas' instance_id='123' host_name='example.gitlab.com' instance_version='17.0' global_user_id='123abc'
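The mapping from the `X-Gitlab-*` request headers in the curl command to the context attributes above can be sketched as a small pure function (this is an illustrative reconstruction, not the actual middleware code; the function name `context_from_headers` and the dataclass are assumptions):

```python
from dataclasses import dataclass
from typing import Mapping, Optional


@dataclass
class EventContext:
    # Hypothetical subset of fields from this MR's validation output.
    realm: Optional[str] = None
    instance_id: Optional[str] = None
    host_name: Optional[str] = None
    instance_version: Optional[str] = None
    global_user_id: Optional[str] = None
    source: Optional[str] = None


def context_from_headers(headers: Mapping[str, str]) -> EventContext:
    # HTTP header names are case-insensitive, so normalize before lookup.
    lower = {k.lower(): v for k, v in headers.items()}
    return EventContext(
        realm=lower.get("x-gitlab-realm"),
        instance_id=lower.get("x-gitlab-instance-id"),
        host_name=lower.get("x-gitlab-host-name"),
        instance_version=lower.get("x-gitlab-version"),
        global_user_id=lower.get("x-gitlab-global-user-id"),
        # 'source' in the validation output looks like the User-Agent
        # (curl/7.81.0); treating it that way is an assumption.
        source=lower.get("user-agent"),
    )
```

In real middleware this function would run in `dispatch` before the request handler, and its result would be stored via `current_event_context.set(...)`.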
patch
diff --git a/ai_gateway/api/v1/chat/agent.py b/ai_gateway/api/v1/chat/agent.py
index 3145c5e7..9c2d92c4 100644
--- a/ai_gateway/api/v1/chat/agent.py
+++ b/ai_gateway/api/v1/chat/agent.py
@@ -28,6 +28,7 @@ from ai_gateway.models import (
)
from ai_gateway.models.base_text import TextGenModelChunk, TextGenModelOutput
from ai_gateway.tracking import log_exception
+from ai_gateway.internal_events import current_event_context, EventContext
__all__ = [
"router",
@@ -74,6 +75,9 @@ async def chat(
),
litellm_factory: Factory = Depends(get_chat_litellm_factory_provider),
):
+ context: EventContext = current_event_context.get()
+ print(f"context: {context}")
+
prompt_component = chat_request.prompt_components[0]
payload = prompt_component.payload
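The `print` in the patch above is only for local validation; the intended consumer is InternalEventTracker. A hypothetical sketch of how a tracker could merge the stored context into an event payload (the `track_event` signature and the returned dict are assumptions, not the real API):

```python
from contextvars import ContextVar
from dataclasses import asdict, dataclass
from typing import Optional


@dataclass
class EventContext:
    # Minimal hypothetical fields for this sketch.
    realm: Optional[str] = None
    instance_id: Optional[str] = None


current_event_context: ContextVar[EventContext] = ContextVar(
    "current_event_context", default=EventContext()
)


class InternalEventTracker:
    """Hypothetical sketch: attach the middleware-stored context to events."""

    def track_event(self, event_name: str) -> dict:
        # Read whatever context the middleware set for this request.
        context = current_event_context.get()
        return {"event": event_name, **asdict(context)}
```

The point of the design is that callers of `track_event` never pass the context explicitly; the middleware sets it once and the tracker picks it up.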
Merge request checklist

- Tests added for new functionality. If not, please raise an issue to follow up.
- Documentation added/updated, if needed.
Edited by Shinya Maeda