# Providers

## overview

22+ AI providers with a pluggable trait system.
## how providers work

Every provider implements the `Provider` trait, so switching providers is a config change — no code changes.
```rust
#[async_trait]
pub trait Provider: Send + Sync {
    async fn chat(&self, messages: &[Message], tools: &[Tool]) -> Result<Response>;
    fn name(&self) -> &str;
}
```
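To illustrate the swap-without-code-changes idea, here is a minimal, synchronous sketch of the same pattern (the real trait is async and uses `Message`/`Tool` types; `EchoProvider` and the `String`-based signatures here are hypothetical stand-ins):

```rust
// Simplified synchronous sketch of the pluggable-provider pattern.
// `EchoProvider` and these types are illustrative, not baihu's real API.
trait Provider {
    fn chat(&self, messages: &[String]) -> Result<String, String>;
    fn name(&self) -> &str;
}

struct EchoProvider;

impl Provider for EchoProvider {
    fn chat(&self, messages: &[String]) -> Result<String, String> {
        Ok(format!("echo: {}", messages.join(" ")))
    }
    fn name(&self) -> &str {
        "echo"
    }
}

// Callers hold a trait object, so the concrete provider is chosen at
// runtime (e.g. from config) with no changes at the call site.
fn run(provider: &dyn Provider, prompt: &str) -> String {
    provider.chat(&[prompt.to_string()]).unwrap_or_default()
}

fn main() {
    let p: Box<dyn Provider> = Box::new(EchoProvider);
    println!("{} -> {}", p.name(), run(p.as_ref(), "hello"));
}
```

Because call sites only see `&dyn Provider`, adding a new backend means implementing the trait and registering its config name; nothing downstream changes.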
## supported providers
| provider | config name | models |
|---|---|---|
| OpenRouter | openrouter | claude, gpt-4, llama, gemini, etc. |
| Anthropic | anthropic | claude-3.5-sonnet, claude-3-opus |
| OpenAI | openai | gpt-4o, gpt-4-turbo |
| Ollama | ollama | any local model |
| Groq | groq | llama, mixtral |
| Mistral | mistral | mistral-large, mistral-medium |
| xAI | xai | grok |
| DeepSeek | deepseek | deepseek-chat, deepseek-coder |
| Fireworks | fireworks | llama, mixtral |
| Perplexity | perplexity | sonar models |
| Cohere | cohere | command-r |
| Cloudflare AI | cloudflare | Workers AI models |
| AWS Bedrock | bedrock | claude, titan |
| Custom | custom:https://... | any OpenAI-compatible API |
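As a sketch of the `custom:` entry, the endpoint URL is embedded in the config name itself. The URL below is hypothetical; only `default_provider` is a key confirmed by this page:

```toml
# Point baihu at any OpenAI-compatible endpoint via the custom: prefix.
# The URL here is a hypothetical example.
default_provider = "custom:https://llm.internal.example/v1"
```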
## reliability

All providers use:
- exponential backoff with ±25% random jitter to prevent thundering herd
- DashMap response cache with 60s TTL so identical prompts don't burn API credits
- SSRF validation on provider URLs (blocks private IP ranges)
## switching providers
```shell
# via CLI flag
baihu agent -p anthropic -m "hello"
```

```toml
# via config
default_provider = "openrouter"
```

```shell
# via env var
BAIHU_PROVIDER=ollama baihu agent
```