Introduce ability to fake out code suggestion models
The code suggestion models require connectivity to cloud services and other complex setup, so when testing benign changes to the API gateway itself it can be useful to run fakes instead.
To accomplish this, we introduced fake implementations of the two text models we use, each of which returns a canned response. They are injected via a Selector that switches on an environment variable; to enable the fakes, set `USE_FAKE_MODELS=True`.
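A minimal sketch of that wiring, assuming a shared model interface; the names `TextGenModel`, `FakeCodeGenModel`, and `model_selector` are illustrative, not the gateway's actual classes:

```python
import os
from abc import ABC, abstractmethod


class TextGenModel(ABC):
    """Interface shared by the real cloud-backed models and the fakes (assumed)."""

    @abstractmethod
    def generate(self, content_above_cursor: str, content_below_cursor: str) -> str:
        ...


class FakeCodeGenModel(TextGenModel):
    """Returns a canned suggestion instead of calling a cloud service."""

    def generate(self, content_above_cursor: str, content_below_cursor: str) -> str:
        return "fake code suggestion"


def model_selector(real_factory, fake_factory=FakeCodeGenModel) -> TextGenModel:
    """Selector that switches on the USE_FAKE_MODELS environment variable."""
    if os.environ.get("USE_FAKE_MODELS", "").lower() in ("true", "1"):
        return fake_factory()
    return real_factory()
```

With the fakes enabled, requesting a suggestion returns the canned response: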
```shell
curl -v -H 'Content-Type: application/json' -d '{
  "prompt_version": 1,
  "current_file": {
    "file_name": "test.py",
    "content_above_cursor": "def is_even(n: int) ->",
    "content_below_cursor": ""
  },
  "project_id": 1,
  "project_path": "path"
}' localhost:5001/v2/completions
```
{"id":"id","model":"codegen","object":"text_completion","created":1686320252,"choices":[{"text":"fake code suggestion","index":0,"finish_reason":"length"}]}
In the future, we could expand this to produce different responses from fixtures too, which may be useful for end-to-end testing.
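A purely illustrative sketch of what fixture-backed responses could look like, assuming a JSON file keyed by file name; neither the class nor the fixture format exists in the gateway today:

```python
import json
from pathlib import Path


class FixtureCodeGenModel:
    """Serves completions from a JSON fixture instead of a single canned string (hypothetical)."""

    def __init__(self, fixture_path: Path):
        # Assumed fixture layout: {"test.py": "def is_even(n: int) -> bool:\n    return n % 2 == 0"}
        self._responses = json.loads(fixture_path.read_text())

    def generate(self, file_name: str, content_above_cursor: str, content_below_cursor: str) -> str:
        # Fall back to the existing canned response when no fixture entry matches.
        return self._responses.get(file_name, "fake code suggestion")
```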