Services like this (an LLM proxy in front of all the providers) are a dime a dozen.
This one has very little on monitoring and no reference to OTEL in the docs.
Which self-hosted one would you recommend?
LiteLLM is one of the most popular solutions. You would self-host the gateway.
We use LiteLLM and it is a bit of a dumpster fire of enterprise features and bugs. I can't even update the budget on keys in the UI (enterprise feature, although it may be a bug that it is marked as such). I can still update budgets through the API (rough example below), but the API is a bit of a mess as well. Then we've run into a lot of bugs, like the UI DDoSing itself when the retry mechanism broke and it just started spamming API requests. And then basic features like cleanup of old logs are enterprise features.
We are actively looking to switch away from it, so it was nice to stumble on a post like this. Something as simple as a proxy with per-key budgeting should not be such a tangled mess.
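For reference, this is roughly how we bump a key's budget through the management API while the UI option is locked. It's a sketch from memory of LiteLLM's /key/update endpoint; the URL, keys and budget value are placeholders for our setup, so double-check against their docs.

    import requests

    PROXY_URL = "http://localhost:4000"   # where the LiteLLM proxy runs (placeholder)
    MASTER_KEY = "sk-master-..."          # proxy admin/master key (placeholder)

    # Update the spend limit on an existing virtual key via the management API.
    resp = requests.post(
        f"{PROXY_URL}/key/update",
        headers={"Authorization": f"Bearer {MASTER_KEY}"},
        json={
            "key": "sk-team-abc-...",     # the virtual key whose budget we're changing
            "max_budget": 50.0,           # new budget in USD
        },
    )
    resp.raise_for_status()
    print(resp.json())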
Are there other alternatives you have been looking at? I’m just getting started looking at these LLM gateways. I was under the impression that LiteLLM was pretty popular but you are not the only one here with negative things to say about it.
I am planning to try any-llm-gateway that this post is about. We don't need anything fancy, so it seems that this might cover our needs.
I'm currently using APISIX. Its AI rate limits are fine and the web UI is a little JSON-heavy, but it got me going on load balancing a bunch of models across Ollama installs.
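The core of it is just a route that round-robins across the Ollama boxes, something like this against the Admin API (hosts, route name and admin key are placeholders for my setup; the AI rate-limit plugin config is separate and omitted here):

    import requests

    ADMIN_URL = "http://127.0.0.1:9180/apisix/admin"   # APISIX Admin API (default port in 3.x)
    HEADERS = {"X-API-KEY": "<your-admin-key>"}        # placeholder admin key

    # One route that load-balances chat requests across two Ollama hosts.
    route = {
        "uri": "/v1/chat/completions",
        "upstream": {
            "type": "roundrobin",            # spread requests evenly
            "nodes": {
                "ollama-a:11434": 1,         # host:port => weight
                "ollama-b:11434": 1,
            },
        },
    }

    resp = requests.put(f"{ADMIN_URL}/routes/ollama-chat", json=route, headers=HEADERS)
    resp.raise_for_status()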
I was looking for a proxy that could maximize throughput to each LLM based on its limits, basically max requests and input/output tokens per second.
I couldn't find anything, so I rolled my own based on Redis and job queues (rough sketch below). It works decently well, but I'd prefer to use something better if it exists.
Does anyone know of something like this that isn't completely over-engineered / over-abstracted?
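The core idea is a per-model, per-second reservation in Redis that the queue workers check before dispatching a job. This isn't my actual code, just a minimal sketch of the approach; the model name and limits are made up.

    import time
    import redis

    r = redis.Redis()

    def try_acquire(model: str, est_tokens: int, max_rps: int, max_tps: int) -> bool:
        """Reserve capacity for one request against the current one-second window."""
        window = int(time.time())
        req_key = f"{model}:req:{window}"
        tok_key = f"{model}:tok:{window}"

        pipe = r.pipeline()
        pipe.incr(req_key)                # count this request
        pipe.incrby(tok_key, est_tokens)  # count its estimated tokens
        pipe.expire(req_key, 2)           # windows clean themselves up
        pipe.expire(tok_key, 2)
        reqs, toks, _, _ = pipe.execute()

        if reqs > max_rps or toks > max_tps:
            # Over the limit: roll back the reservation and let the caller retry later.
            r.decr(req_key)
            r.decrby(tok_key, est_tokens)
            return False
        return True

    # A queue worker keeps the job parked until the model has capacity.
    while not try_acquire("gpt-4o", est_tokens=1500, max_rps=10, max_tps=30000):
        time.sleep(0.05)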
Feels like this is carving out a middle layer, simpler than other gateways out there, but way more practical than just a unified client library.
I'm conflicted on what Mozilla is doing here. On the one hand, it is nice that they are getting involved, but come on, don't you all have Firefox to work on?
This is a classic case of an over-enthusiastic engineer who says yes / raises their hand for everything but doesn't do any one thing properly. At some point, you have to sit down and tell them to focus on one thing and do it properly.
Mozilla spun up a whole new entity (Mozilla.ai) to do AI stuff, so doing AI stuff outside of Firefox is already baked into the equation, whatever you think of this particular thing.
They're dumping competition on two other open-source Python libraries, LiteLLM and simonw's llm. Unlike those two, Mozilla's any-llm doesn't have to make money. I'm sure simonw will be welcoming because he's a friendly kind of guy, but it might seem frustrating to LiteLLM, which has a paid offering and would presumably prefer organic competition rather than whatever magic 8 ball Mozilla uses.
I don't think those are comparable. simonw's llm has a Python SDK, but it's very much CLI-first. LiteLLM is very much about the SDK. You can wrap some agent SDKs around it, like Gemini's, but that's for agents, not workflows. I can't really think of them as being in the same category.
There is also PydanticAI Gateway (https://ai.pydantic.dev/gateway/). I use it with the PydanticAI framework and it's quite nice.
Thoughts on any-llm-gateway versus litellm-proxy?
litellm is a great library, but one team using litellm-proxy reported having many issues with it to me. I haven't tried it yet.
Yeah, I wonder what gaps in LiteLLM Proxy made Mozilla want to even do this.
Interested to see how this stacks up against Bifrost (fast but many features paywalled) and LiteLLM Proxy (featureful but garbage code quality). Especially if it gets a web admin / reporting frontend and high availability.
We are just now looking into LLM Gateways and LiteLLM was one I was considering looking into. I’m curious to hear more about what makes the code quality garbage.
I've deployed LiteLLM proxy in a number of locations and we're looking to swap it out (probably to Bifrost). We've seen many bugs with it that never should have made it to a release; most stem from poor code quality or what I'd classify as poor development practices. It's also slow, it doesn't scale well, and it adds a lot of latency.
Bugs include, but are not limited to: multiple ways budget limits aren't enforced, parameter-handling issues, configuration/state mismatches, etc.
What makes this worse is that if you come to the devs with the problem, a solution, and even a PR, it's very difficult to get them to understand or action it, let alone treat critical things like major budget blowouts as a priority.
How do you like bugs where tool calls don't work, but only for the Ollama provider and only when streaming is enabled? This is one of the real issues I had to debug with LiteLLM.
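For context, the call shape that hit it was roughly this (model name and tool definition are made-up placeholders); the same request was fine with stream=False or with other providers:

    import litellm

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",   # example tool, not the real one
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    # With streaming enabled against an Ollama-backed model, the tool calls
    # never showed up in the chunks; non-streaming worked.
    response = litellm.completion(
        model="ollama/llama3.1",
        messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
        tools=tools,
        stream=True,
    )
    for chunk in response:
        print(chunk)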
I personally had no issues using the client libs; my only complaint is that they only offer official Python ones. I'd love to see them publish a TypeScript one.