# LLM Providers
SwarmOS supports 16 LLM providers. Select your provider and add your API key in Settings to power your agents.
## How to Select a Provider

1. Open Settings in the SwarmOS app.
2. Select your preferred provider and model.
3. Add your API key (stored encrypted per provider).
4. Agents will use the selected provider for task execution.
Most providers use the OpenAI-compatible API (the same base-URL-plus-API-key pattern). SwarmOS sets `OPENAI_BASE_URL` and `OPENAI_API_KEY` for the agent runner. Claude Max uses the local Claude CLI; Gemini uses a separate driver.
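The OpenAI-compatible pattern above can be sketched as follows. This is an illustrative sketch, not SwarmOS's actual internals: the helper name, mapping structure, and the idea of a provider-to-URL table are assumptions; the endpoint URLs come from the provider table below.

```python
import os

# OpenAI-compatible base URLs (illustrative mapping; endpoints from the provider table).
PROVIDER_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "minimax": "https://api.minimax.chat/v1",
    "ollama": "http://localhost:11434/v1",
}

def runner_env(provider: str, api_key: str) -> dict:
    """Build the environment variables the agent runner reads."""
    return {
        "OPENAI_BASE_URL": PROVIDER_BASE_URLS[provider],
        "OPENAI_API_KEY": api_key,
    }

# Point the runner at a provider by exporting the two variables it reads.
os.environ.update(runner_env("openai", "sk-..."))
```

Swapping providers is then just a matter of changing the base URL and key; the agent runner code itself stays the same.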
## Supported Providers (16 total)
| Provider | Key Models & Notes |
|---|---|
| Claude Max | Uses local Claude CLI |
| Claude API | Anthropic api.anthropic.com |
| OpenAI | api.openai.com/v1 |
| Codex | Uses local Codex CLI |
| MiniMax | api.minimax.chat/v1 |
| OpenRouter | Routes to Claude, GPT-4o, Gemini, DeepSeek R1, Llama 4 |
| Groq | Llama 3.3 70B (ultra-fast), Mixtral 8x7B |
| DeepSeek | V3 (reasoning), R1 (chain-of-thought) |
| Together | Llama 3.3 70B Turbo, Qwen 2.5 72B |
| Mistral | Mistral Large, Codestral (code) |
| Fireworks | Llama 3.3 70B (optimized) |
| Perplexity | Sonar Pro (search-grounded) |
| xAI | Grok 3 |
| Ollama | Local — Llama 3.2, Code Llama, no API costs |
| Gemini | Google generativelanguage.googleapis.com |
## Local & No-Cost Options
- Ollama — Run models locally at `localhost:11434/v1` (Llama 3.2, Code Llama). No API key or usage costs.
- Claude Max / Codex — Use local CLI installations. No remote API calls.
For high-volume or cost-sensitive workloads, Ollama is ideal. OpenRouter gives access to multiple models (Claude, GPT-4o, Gemini, DeepSeek R1, Llama 4) through a single API.
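As a minimal, standard-library-only sketch of talking to a local Ollama instance: the endpoint path follows Ollama's OpenAI-compatible chat API, but the model tag and the placeholder bearer token are assumptions (Ollama does not validate the key).

```python
import json
import urllib.request

# Ollama exposes an OpenAI-compatible endpoint locally; no real key is needed.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request for a local Ollama server."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # placeholder; Ollama ignores the value
        },
    )

# With a local server running (model tag is an assumption):
# req = build_request("llama3.2", "Summarize this task")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape matches the OpenAI API, the same code works against any of the OpenAI-compatible providers above by changing the URL and key.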
For guidance on getting started, see the Functions guide.