chore(deps): update dependency litellm to v1.49.2
This MR contains the following updates:
Package | Type | Update | Change |
---|---|---|---|
litellm | dependencies | minor | 1.48.17 -> 1.49.2 |
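To try the proposed version locally before merging, a minimal sketch (this assumes the project installs litellm from PyPI with pip; adjust for poetry, uv, or a pinned requirements file):

```sh
# Upgrade to the version proposed by this MR (pip-managed environment assumed)
pip install --upgrade "litellm==1.49.2"

# Confirm what got installed
pip show litellm | grep -i version
```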
⚠️ Warning: Some dependencies could not be looked up. Check the warning logs for more information.
View the Renovate pipeline for this MR
Release Notes
BerriAI/litellm (litellm)
v1.49.2
What's Changed
- Add literalai in the sidebar observability category by @willydouhard in https://github.com/BerriAI/litellm/pull/6163
- Search across docs, GitHub issues, and discussions by @yujonglee in https://github.com/BerriAI/litellm/pull/6160
- Feat: Add Langtrace integration by @alizenhom in https://github.com/BerriAI/litellm/pull/5341
- (fix) add azure/gpt-4o-2024-05-13 pricing by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6174
- LiteLLM Minor Fixes & Improvements (10/10/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6158
- (fix) batch_completion fails with bedrock due to extraneous [max_workers] key by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6176
- (fix) provider wildcard routing - when models specified without provider prefix by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6173
New Contributors
- @alizenhom made their first contribution in https://github.com/BerriAI/litellm/pull/5341
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.49.1...v1.49.2
Docker Run LiteLLM Proxy
```sh
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.2
```
🎉
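Once the container is up, the proxy serves an OpenAI-compatible API on port 4000. A minimal smoke test, assuming a model has been configured on the proxy (the model name gpt-4o and the key sk-1234 below are placeholders):

```sh
# Placeholder model name and key: substitute a model actually configured on your
# proxy and your real master key (or drop the header if auth is not enabled)
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}]
      }'
```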
Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 230.0 | 263.3150249465347 | 6.123838960549251 | 0.0 | 1833 | 0 | 205.50188100003197 | 2676.1843779999026 |
Aggregated | Passed | 230.0 | 263.3150249465347 | 6.123838960549251 | 0.0 | 1833 | 0 | 205.50188100003197 | 2676.1843779999026 |
v1.49.1
What's Changed
- (bug fix proxy ui) Default Team still rendered Even when disabled by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6134
- LiteLLM Minor Fixes & Improvements (10/09/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6139
- (feat) use regex pattern matching for wildcard routing by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6150
- [Feat] Observability integration - Opik by Comet by @jverre in https://github.com/BerriAI/litellm/pull/6062
- drop imghdr (#5736) by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6153
New Contributors
- @jverre made their first contribution in https://github.com/BerriAI/litellm/pull/6062
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.49.0...v1.49.1
Docker Run LiteLLM Proxy
```sh
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.1
```
🎉
Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 120.0 | 151.80524006444915 | 6.428687279464693 | 0.0 | 1924 | 0 | 106.09135400000014 | 2659.368719999975 |
Aggregated | Passed | 120.0 | 151.80524006444915 | 6.428687279464693 | 0.0 | 1924 | 0 | 106.09135400000014 | 2659.368719999975 |
v1.49.0
🚨 LiteLLM Proxy DB Schema updated - new table LiteLLM_OrganizationMembership created
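Because this release adds a table, deployments that persist proxy state need a database connection so the new schema can be applied; a minimal sketch, assuming Postgres and placeholder credentials in DATABASE_URL:

```sh
# DATABASE_URL uses placeholder credentials; point it at your own Postgres instance.
# The proxy can then create/update its tables (including LiteLLM_OrganizationMembership).
docker run \
  -e DATABASE_URL="postgresql://user:password@db-host:5432/litellm" \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.0
```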
What's Changed
- (fix) clean up root repo - move entrypoint.sh and build_admin_ui to /docker by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6110
- (fix) Fix Groq pricing for llama3.1 by @kiriloman in https://github.com/BerriAI/litellm/pull/6114
- Fix: Literal AI llm completion logging by @willydouhard in https://github.com/BerriAI/litellm/pull/6096
- LiteLLM Minor Fixes & Improvements (10/08/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6119
- (feat proxy) [beta] add support for organization role based access controls by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6112
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.48.19...v1.49.0
Docker Run LiteLLM Proxy
```sh
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.0
```
🎉
Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 89 | 103.77355238552961 | 6.468497554569813 | 0.0 | 1935 | 0 | 71.25175699997044 | 2212.39985699998 |
Aggregated | Passed | 89 | 103.77355238552961 | 6.468497554569813 | 0.0 | 1935 | 0 | 71.25175699997044 | 2212.39985699998 |
v1.48.19
What's Changed
- [docs] fix links due to broken list in enterprise features by @pradhyumna85 in https://github.com/BerriAI/litellm/pull/6103
- (docs) key based callbacks - add info on behavior by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6107
- (docs) add remaining litellm settings on configs.md doc by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6108
- (clean up) move docker files from root to docker folder by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6109
- LiteLLM Minor Fixes & Improvements (10/07/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6101
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.48.18...v1.48.19
Docker Run LiteLLM Proxy
```sh
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.19
```
🎉
Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 170.0 | 197.18394088461483 | 6.170329569339292 | 0.0 | 1846 | 0 | 127.89470899997468 | 5195.441569000024 |
Aggregated | Passed | 170.0 | 197.18394088461483 | 6.170329569339292 | 0.0 | 1846 | 0 | 127.89470899997468 | 5195.441569000024 |
v1.48.18
What's Changed
- fix(utils.py): fix fix pydantic obj to schema creation for vertex en… by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6071
- Proxy: include customer budget in responses by @kvadros in https://github.com/BerriAI/litellm/pull/5977
- (proxy ui) - fix view user pagination by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6094
- (proxy ui sso flow) - fix invite user sso flow by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6093
- (bug fix) TTL not being set for embedding caching requests by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6095
- (feat proxy) add v2 maintained LiteLLM grafana dashboard by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6098
New Contributors
- @kvadros made their first contribution in https://github.com/BerriAI/litellm/pull/5977
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.48.17...v1.48.18
Docker Run LiteLLM Proxy
```sh
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.18
```
🎉
Don't want to maintain your internal proxy? Get in touch about the Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
---|---|---|---|---|---|---|---|---|---|
/chat/completions | Passed | 150.0 | 179.59820728602008 | 6.264331807633761 | 0.0 | 1874 | 0 | 123.93443999997089 | 1518.5208869999656 |
Aggregated | Passed | 150.0 | 179.59820728602008 | 6.264331807633761 | 0.0 | 1874 | 0 | 123.93443999997089 | 1518.5208869999656 |
Configuration
- [ ] If you want to rebase/retry this MR, check this box
This MR has been generated by Renovate Bot.