chore(deps): update dependency litellm to v1.51.0
This MR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| litellm | dependencies | minor | `1.50.0` -> `1.51.0` |
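If this project pins the dependency with pip, the update above is a one-line change. A minimal sketch, assuming a `requirements.txt`-style pin (adjust accordingly for Poetry or uv lockfiles):

```
litellm==1.51.0
```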
> ⚠️ **Warning**: Some dependencies could not be looked up. Check the warning logs for more information.
View the Renovate pipeline for this MR
## Release Notes

**BerriAI/litellm (litellm)**

### v1.51.0

#### What's Changed
- perf: remove 'always_read_redis' - adding +830ms on each llm call by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6414
- feat(litellm_logging.py): refactor standard_logging_payload function … by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6388
- LiteLLM Minor Fixes & Improvements (10/23/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6407
- allow configuring httpx hooks for AsyncHTTPHandler (#6290) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6415
- feat(proxy_server.py): check if views exist on proxy server startup +… by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6360
- feat(litellm_pre_call_utils.py): support 'add_user_information_to_llm… by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6390
- (admin ui) - show created_at for virtual keys by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6429
- (feat) track created_at, updated_at for virtual keys by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6428
- Code cov - add checks for patch and overall repo by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6436
- (admin ui / auth fix) Allow internal user to call /key/{token}/regenerate by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6430
- LiteLLM Minor Fixes & Improvements (10/24/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6421
- (proxy audit logs) fix serialization error on audit logs by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6433
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.50.4...v1.51.0
#### Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.51.0
```
Don't want to maintain your internal proxy? Get in touch 🎉 Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
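Once the container above is running, the proxy exposes an OpenAI-compatible API on port 4000. A minimal smoke-test sketch that builds the request body for `/chat/completions` (the model name `gpt-3.5-turbo` is an assumption; substitute a model defined in your proxy config):

```python
import json

# Request body for the proxy's OpenAI-compatible /chat/completions endpoint.
# The model name here is an assumption; use one your proxy config actually defines.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello from the proxy smoke test"}],
}
body = json.dumps(payload)
print(body)
```

With the container running, the same body can be sent with curl: `curl http://localhost:4000/chat/completions -H 'Content-Type: application/json' -d @- <<< "$body"`.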
#### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed | 230.0 | 256.2776533033099 | 6.163517714105049 | 0.0 | 1843 | 0 | 210.4747610000004 | 1438.3136239999885 |
| Aggregated | Passed | 230.0 | 256.2776533033099 | 6.163517714105049 | 0.0 | 1843 | 0 | 210.4747610000004 | 1438.3136239999885 |
### v1.50.4

#### What's Changed
- (feat) Arize - Allow using Arize HTTP endpoint by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6364
- LiteLLM Minor Fixes & Improvements (10/22/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6384
- build(deps): bump http-proxy-middleware from 2.0.6 to 2.0.7 in /docs/my-website by @dependabot in https://github.com/BerriAI/litellm/pull/6395
- (docs + testing) Correctly document the timeout value used by litellm proxy is 6000 seconds + add to best practices for prod by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6339
- (refactor) move convert dict to model response to llm_response_utils/ by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6393
- (refactor) litellm.Router client initialization utils by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6394
- (fix) Langfuse key based logging by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6372
- Revert "(refactor) litellm.Router client initialization utils " by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6403
- (fix) using /completions with `echo` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6401
- (refactor) prometheus async_log_success_event to be under 100 LOC by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6416
- (refactor) router - use static methods for client init utils by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6420
- (code cleanup) remove unused and undocumented logging integrations - litedebugger, berrispend by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6406
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.50.2...v1.50.4
#### Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.50.4
```
Don't want to maintain your internal proxy? Get in touch 🎉 Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
#### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Failed | 280.0 | 312.6482922531862 | 6.037218908394318 | 0.0 | 1805 | 0 | 231.8999450000092 | 2847.2051709999846 |
| Aggregated | Failed | 280.0 | 312.6482922531862 | 6.037218908394318 | 0.0 | 1805 | 0 | 231.8999450000092 | 2847.2051709999846 |
### v1.50.2

#### What's Changed
- (fix) get_response_headers for Azure OpenAI by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6344
- fix(litellm-helm): correctly use dbReadyImage and dbReadyTag values by @Hexoplon in https://github.com/BerriAI/litellm/pull/6336
- fix(proxy_server.py): add 'admin' user to db by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6223
- refactor(redis_cache.py): use a default cache value when writing to r… by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6358
- LiteLLM Minor Fixes & Improvements (10/21/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6352
- Refactor: apply early return by @Haknt in https://github.com/BerriAI/litellm/pull/6369
- (refactor) remove berrispendLogger - unused logging integration by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6363
- (fix) standard logging metadata + add unit testing by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6366
- Revert "(fix) standard logging metadata + add unit testing " by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6381
- Add new Claude 3.5 sonnet model card by @lowjiansheng in https://github.com/BerriAI/litellm/pull/6378
- Add claude 3 5 sonnet 2024102 models for all providers by @Manouchehri in https://github.com/BerriAI/litellm/pull/6380
#### New Contributors
- @Hexoplon made their first contribution in https://github.com/BerriAI/litellm/pull/6336
- @Haknt made their first contribution in https://github.com/BerriAI/litellm/pull/6369
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.50.1...v1.50.2
#### Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.50.2
```
Don't want to maintain your internal proxy? Get in touch 🎉 Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
#### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed | 240.0 | 271.2844291307854 | 6.2111756488034775 | 0.0 | 1858 | 0 | 210.62568199999987 | 3226.4373430000433 |
| Aggregated | Passed | 240.0 | 271.2844291307854 | 6.2111756488034775 | 0.0 | 1858 | 0 | 210.62568199999987 | 3226.4373430000433 |
### v1.50.1

#### What's Changed
- doc - using gpt-4o-audio-preview by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6326
- (refactor) `get_cache_key` to be under 100 LOC function by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6327
- Litellm openai audio streaming by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6325
- LiteLLM Minor Fixes & Improvements (10/18/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6320
- LiteLLM Minor Fixes & Improvements (10/19/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6331
- fix - unhandled jsonDecodeError in `convert_to_model_response_object` by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6338
- (testing) add test coverage for init custom logger class by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6341
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.50.0...v1.50.1
#### Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.50.1
```
Don't want to maintain your internal proxy? Get in touch 🎉 Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
#### Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed | 260.0 | 288.9506471715694 | 6.1364168904754175 | 0.0 | 1836 | 0 | 231.4412910000101 | 1825.7555540000112 |
| Aggregated | Passed | 260.0 | 288.9506471715694 | 6.1364168904754175 | 0.0 | 1836 | 0 | 231.4412910000101 | 1825.7555540000112 |
## Configuration

- [ ] If you want to rebase/retry this MR, check this box
This MR has been generated by Renovate Bot.