
chore(deps): update dependency litellm to v1.49.2

renovate requested to merge renovate/litellm-1.x-lockfile into main

This MR contains the following updates:

| Package | Type | Update | Change |
| --- | --- | --- | --- |
| litellm | dependencies | minor | `1.48.17` -> `1.49.2` |
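
If you consume litellm directly, the same bump can be applied locally. A minimal sketch, assuming a plain pip workflow (substitute the equivalent upgrade command if your lockfile is managed by poetry, uv, or pdm):

```shell
# Pin litellm to the version this MR targets (pip-based sketch; adjust
# for poetry/uv/pdm if your lockfile is managed by one of those tools).
pip install --upgrade "litellm==1.49.2"

# Confirm the installed version matches the lockfile.
pip show litellm | grep -i '^version'
```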

Warning

Some dependencies could not be looked up. Check the warning logs for more information.

View the Renovate pipeline for this MR


Release Notes

BerriAI/litellm (litellm)

v1.49.2

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.49.1...v1.49.2

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.2
```
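
Once the container is up, the proxy serves an OpenAI-compatible API on port 4000; the `/chat/completions` route is the same one the load tests below exercise. A minimal smoke-test sketch — the model name and API key are placeholders for whatever your proxy config defines:

```shell
# Smoke-test the proxy's OpenAI-compatible endpoint. "gpt-3.5-turbo" and
# the bearer token are placeholders for a model/key configured on your proxy.
curl http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "ping"}]
  }'
```
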
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed | 230.0 | 263.32 | 6.12 | 0.0 | 1833 | 0 | 205.50 | 2676.18 |
| Aggregated | Passed | 230.0 | 263.32 | 6.12 | 0.0 | 1833 | 0 | 205.50 | 2676.18 |

v1.49.1

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.49.0...v1.49.1

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.1
```
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed | 120.0 | 151.81 | 6.43 | 0.0 | 1924 | 0 | 106.09 | 2659.37 |
| Aggregated | Passed | 120.0 | 151.81 | 6.43 | 0.0 | 1924 | 0 | 106.09 | 2659.37 |

v1.49.0

Compare Source

🚨 LiteLLM Proxy DB schema updated - new table `LiteLLM_OrganizationMembership` created
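
Since this release adds a table, it is worth verifying the migration applied after upgrading. A minimal sketch, assuming Postgres and that `DATABASE_URL` points at the proxy's database (adjust for your setup):

```shell
# Returns the table's OID if it exists, NULL otherwise (illustrative;
# assumes Postgres and a DATABASE_URL env var for the proxy's DB).
psql "$DATABASE_URL" -c "SELECT to_regclass('public.\"LiteLLM_OrganizationMembership\"');"
```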

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.48.19...v1.49.0

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.49.0
```
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed | 89 | 103.77 | 6.47 | 0.0 | 1935 | 0 | 71.25 | 2212.40 |
| Aggregated | Passed | 89 | 103.77 | 6.47 | 0.0 | 1935 | 0 | 71.25 | 2212.40 |

v1.48.19

Compare Source

What's Changed

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.48.18...v1.48.19

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.19
```
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed | 170.0 | 197.18 | 6.17 | 0.0 | 1846 | 0 | 127.89 | 5195.44 |
| Aggregated | Passed | 170.0 | 197.18 | 6.17 | 0.0 | 1846 | 0 | 127.89 | 5195.44 |

v1.48.18

Compare Source

What's Changed

New Contributors

Full Changelog: https://github.com/BerriAI/litellm/compare/v1.48.17...v1.48.18

Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.48.18
```
Don't want to maintain your internal proxy? Get in touch 🎉

Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat

Load Test LiteLLM Proxy Results

| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| /chat/completions | Passed | 150.0 | 179.60 | 6.26 | 0.0 | 1874 | 0 | 123.93 | 1518.52 |
| Aggregated | Passed | 150.0 | 179.60 | 6.26 | 0.0 | 1874 | 0 | 123.93 | 1518.52 |

Configuration

📅 Schedule: Branch creation - "every weekend" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever this MR becomes conflicted, or when you tick the rebase/retry checkbox.

🔕 Ignore: Close this MR and you won't be reminded about this update again.


- [ ] If you want to rebase/retry this MR, check this box

This MR has been generated by Renovate Bot.
