feat: Universal Local LLM & Custom Endpoint Support (Ollama, LM Studio, vLLM, etc.) #228

sreenivas134 wants to merge 1 commit into
Conversation
Thanks for the contribution. We are going to close this as redundant with LiteLLM's existing functionality. PageIndex already routes LLM calls through LiteLLM, and LiteLLM already supports custom and local OpenAI-compatible endpoints out of the box. We do not want to duplicate LiteLLM's provider/endpoint routing inside PageIndex with ad-hoc CLI flags, environment-variable mutation, global state, or model-name mapping. A future contribution would be more useful if it documents the recommended LiteLLM usage, or adds a minimal PageIndex config pass-through for LiteLLM parameters.
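For reference, a minimal sketch of the direct LiteLLM usage referred to above, assuming a locally running Ollama instance and a generic OpenAI-compatible server; the model names, ports, and prompts are placeholders:

```python
# Sketch only: LiteLLM already routes to local/custom endpoints without
# any PageIndex-side changes. Model names and URLs below are placeholders.
from litellm import completion

# Ollama running locally, via LiteLLM's "ollama/" provider prefix
resp = completion(
    model="ollama/llama3",
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "Summarize this document."}],
)

# Any OpenAI-compatible server (LM Studio, vLLM, LocalAI, ...) via the "openai/" prefix
resp = completion(
    model="openai/my-local-model",
    api_base="http://localhost:8000/v1",
    api_key="not-needed",
    messages=[{"role": "user", "content": "Summarize this document."}],
)

print(resp.choices[0].message.content)
```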
This PR introduces comprehensive support for local LLMs and custom API endpoints. While existing proposals often focus on specific providers (like Ollama), this implementation leverages LiteLLM's full capabilities to provide a universal, config-driven interface that works with any OpenAI-compatible or local provider (LM Studio, vLLM, LocalAI, Ollama, etc.) with minimal configuration.
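As an illustration of the config-driven pass-through idea (in both the description above and the maintainer's suggestion), here is a hedged sketch; the config keys and the `call_llm` helper are hypothetical and not taken from this PR's diff:

```python
# Hypothetical sketch: forwarding user-config endpoint settings to LiteLLM.
# The config keys ("model", "api_base", "api_key") and call_llm() are
# illustrative assumptions, not the actual PageIndex or PR implementation.
import litellm

def call_llm(config: dict, messages: list[dict]) -> str:
    response = litellm.completion(
        model=config["model"],            # e.g. "ollama/llama3" or "openai/my-model"
        api_base=config.get("api_base"),  # e.g. "http://localhost:11434"
        api_key=config.get("api_key"),
        messages=messages,
    )
    return response.choices[0].message.content

cfg = {"model": "openai/my-local-model", "api_base": "http://localhost:8000/v1", "api_key": "not-needed"}
print(call_llm(cfg, [{"role": "user", "content": "Hello"}]))
```

Routing everything through a single config dict keeps provider-specific branches out of the calling code; LiteLLM's model-prefix convention handles the dispatch.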
✨ Key Features
Verification