I saw the recent 0.0.12 release notes and was surprised to see that community contributions have been reimplemented, particularly the LiteLLM integration per #524 (comment) (previously contributed in #318). Is this a common occurrence?
Can you clarify why, @rm-openai? What is the policy for reimplementing already-existing proposals without technical argumentation, as opposed to working together with the community to converge? Couldn't you simply have stated that you preferred a LiteLLM Python API integration over the proxy server approach provided (which is less intrusive) and worked together in that direction? Isn't this against the very purpose of open sourcing this repo and co-maintaining it with the community? It appears to me that the reimplementation in this case is not technically advantageous and leaves aside relevant aspects, which hints at a lock-in approach.
Bottom line, I very much dislike this approach and request clarification. It appears to me that the only intent here is to collect developer/user data by default when integrating LiteLLM (and yes, I know a default config is provided and this can be disabled).
Please clarify both technically and strategically. This is relevant to me and my organizations and will inform whether or not we continue investing time in contributing back to this project.
vmayoral changed the title from "Why are tech contributions ignored, shall we simply stop 🛑 contributing back?" to "Why are tech contributions ignored and reimplemented without discussion and/or consensus, shall we simply stop 🛑 contributing back?" on Apr 22, 2025.
Hey @vmayoral - I'm really sorry about this. I didn't mean to ignore your PR by any means; I just missed it given the volume of PRs that have been submitted.
Would definitely be interested in discussing it though. I'm not as familiar with the LiteLLM proxy server; I added support for the litellm library because it enables using more models in the same way, without needing to know the details of how LiteLLM works: you just need a model name.
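For readers following along, the difference between the two integration styles under discussion can be sketched roughly as follows. This is a hypothetical illustration, not actual SDK or LiteLLM code; the function name, config keys, and URL are invented for this sketch:

```python
# Hypothetical sketch of the two integration styles discussed in this
# thread. Nothing below is real SDK code; all names are assumptions.

# Style 1: in-process library integration. The SDK calls LiteLLM's
# Python API directly, so the caller only supplies a model string such
# as "anthropic/claude-3-opus"; the provider is inferred from the name.
def parse_model_name(model: str) -> tuple[str, str]:
    """Split a 'provider/model' string, defaulting the provider to openai."""
    provider, sep, name = model.partition("/")
    if not sep:
        # Bare model name, e.g. "gpt-4o": assume the default provider.
        return "openai", model
    return provider, name

# Style 2: proxy-server integration. The core SDK is left untouched and
# an OpenAI-compatible client is simply pointed at a locally running
# LiteLLM proxy, which does the provider routing out of process.
proxy_client_config = {
    "base_url": "http://localhost:4000",  # assumed local proxy address
    "api_key": "sk-local-proxy-key",      # placeholder, not a real key
}
```

The trade-off debated above: style 1 means any caller can switch providers with just a model string, while style 2 keeps the SDK unmodified and moves routing concerns into a separate, swappable process.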
For your PR - it currently looks like all the code is in the examples directory. So would it be correct to say it's more of an integration example than a change to the core SDK?
Thanks @rm-openai for your answer. Understood. No worries.
For your PR - it currently looks like it's all code in the examples directory. So would it be correct to say that it's more of an integration example rather than a change to the core SDK?
That is correct. It was meant as an easy, quick way to integrate LiteLLM without disrupting the core SDK, which I would have been happy to modify accordingly and contribute upstream (in fact, it is what we use internally). Let me rebase our existing work on top of your LiteLLM integration and see how things can move forward in alignment. I'll close this ticket for now.