LLM Model Usage
- Model performance reference: https://github.com/lenML/lenml-llm-leaderboard
- Model leaderboard: https://lenml.github.io/lenml-llm-leaderboard
Ollama provider
Default Ollama server address: http://127.0.0.1:11434
No API key is required.
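As a minimal sketch of talking to a local Ollama server, the snippet below builds a request for Ollama's `/api/generate` endpoint using only the standard library. The model name `llama3` is just an illustrative placeholder; the endpoint path and payload fields follow Ollama's public REST API.

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"  # default Ollama server address

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request for Ollama's /api/generate endpoint (no API key needed)."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3", "Why is the sky blue?")
# To actually send it (requires a running Ollama server):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The network call is left commented out so the sketch stands on its own even when no Ollama server is running.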
OpenAI provider
You can use the OpenAI provider to access all OpenAI models, such as gpt-4, gpt-4o, gpt-4o-mini, etc.
Default OpenAI server address: https://api.openai.com
An API key is required.
When you use another proxy or relay provider, change the API URL accordingly.
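The points above can be sketched as a request builder: the base URL is a parameter, so swapping in a proxy address is a one-argument change. The endpoint path, `Authorization: Bearer` header, and payload shape follow the standard OpenAI chat-completions API; the key read from `OPENAI_API_KEY` (with a placeholder fallback) is an assumption for illustration.

```python
import json
import os
import urllib.request

def build_chat_request(api_url: str, api_key: str,
                       model: str, content: str) -> urllib.request.Request:
    """Build an OpenAI /v1/chat/completions request; pass a proxy URL as api_url if needed."""
    payload = {"model": model, "messages": [{"role": "user", "content": content}]}
    return urllib.request.Request(
        f"{api_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # API key is required
        },
        method="POST",
    )

req = build_chat_request(
    "https://api.openai.com",                     # default server address
    os.environ.get("OPENAI_API_KEY", "sk-..."),   # placeholder key
    "gpt-4o-mini",
    "Hello!",
)
```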
Anthropic provider
You can use the Anthropic provider to access all Anthropic models, such as the Claude family (Opus, Sonnet, Haiku).
Default Anthropic server address: https://api.anthropic.com
An API key is required.
When you use another proxy or relay provider, change the API URL accordingly.
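The Anthropic API differs from OpenAI's in its endpoint and headers, which the sketch below makes explicit: the Messages API uses `/v1/messages`, an `x-api-key` header rather than a bearer token, a mandatory `anthropic-version` header, and a required `max_tokens` field. The model name is an illustrative placeholder; the base URL is a parameter so a proxy address can be substituted.

```python
import json
import os
import urllib.request

def build_messages_request(api_url: str, api_key: str,
                           model: str, content: str) -> urllib.request.Request:
    """Build an Anthropic Messages API request; pass a proxy URL as api_url if needed."""
    payload = {
        "model": model,
        "max_tokens": 256,  # the Messages API requires max_tokens
        "messages": [{"role": "user", "content": content}],
    }
    return urllib.request.Request(
        f"{api_url}/v1/messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-api-key": api_key,               # API key is required
            "anthropic-version": "2023-06-01",  # required API version header
        },
        method="POST",
    )

req = build_messages_request(
    "https://api.anthropic.com",                      # default server address
    os.environ.get("ANTHROPIC_API_KEY", "sk-ant-..."),  # placeholder key
    "claude-3-5-sonnet-latest",
    "Hello!",
)
```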
SiliconFlow provider
You can use the SiliconFlow provider to access all SiliconFlow models, such as Qwen/Qwen2.5-7B-Instruct and THUDM/glm-4-9b-chat.
Default SiliconFlow server address: https://api.siliconflow.ai
An API key is required.
It is accessed through the official OpenAI Python SDK.
OpenAI-API-compatible provider
Any OpenAI-API-compatible service is also supported; it can be accessed with the official OpenAI Python SDK. You need to set an API key, an API URL, and a custom name for the provider.
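A minimal sketch of how such a custom-named provider might be modeled, using SiliconFlow as the example of an OpenAI-compatible service. The `OpenAICompatibleProvider` class and the `providers` registry are hypothetical names for illustration, and the key value is a placeholder; only the shared `/v1/chat/completions` path is part of the OpenAI-compatible convention.

```python
from dataclasses import dataclass

@dataclass
class OpenAICompatibleProvider:
    """A custom-named provider that speaks the OpenAI API (hypothetical sketch)."""
    name: str     # custom name for the provider
    api_url: str  # base URL of the compatible service
    api_key: str  # API key for the service (placeholder below)

    def chat_completions_url(self) -> str:
        # OpenAI-compatible services expose the same /v1/chat/completions path
        return f"{self.api_url}/v1/chat/completions"

# Example: registering SiliconFlow as an OpenAI-compatible provider
providers = {
    "siliconflow": OpenAICompatibleProvider(
        name="siliconflow",
        api_url="https://api.siliconflow.ai",
        api_key="sk-...",  # placeholder key
    ),
}
```

With the official OpenAI Python SDK, the same two settings map directly onto the client constructor: the API URL becomes `base_url` and the key becomes `api_key`.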