LLM Providers

SwarmOS supports 16 LLM providers. Select your provider and add your API key in Settings to power your agents.

How to Select a Provider

  1. Open Settings in the SwarmOS app
  2. Select your preferred provider and model
  3. Add your API key (stored encrypted per provider)
  4. Agents will use the selected provider for task execution

Most providers use the OpenAI-compatible API (the same base-URL-plus-key pattern): SwarmOS sets OPENAI_BASE_URL and OPENAI_API_KEY for the agent runner. Claude Max and Codex run through their local CLIs; Gemini uses a separate driver.
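As a rough sketch of that pattern (the function name and provider table here are illustrative, not part of the SwarmOS API; the base URLs are taken from the provider list below):

```python
import os

# Illustrative mapping of provider -> OpenAI-compatible base URL.
# These URLs come from the supported-providers table; the dict and
# function are our own sketch, not SwarmOS internals.
PROVIDER_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "minimax": "https://api.minimax.chat/v1",
    "ollama": "http://localhost:11434/v1",  # local, no key needed
}

def runner_env(provider: str, api_key: str = "") -> dict:
    """Build the environment an OpenAI-compatible agent runner would see."""
    return {
        "OPENAI_BASE_URL": PROVIDER_BASE_URLS[provider],
        "OPENAI_API_KEY": api_key,
    }

env = runner_env("ollama")  # local Ollama runs without an API key
print(env["OPENAI_BASE_URL"])
```

Because only the two environment variables change, switching providers does not require changing agent code.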

Supported Providers (16 total)

  Provider      Key Models & Notes
  ------------  ------------------------------------------------------
  Claude Max    Uses local Claude CLI
  Claude API    Anthropic api.anthropic.com
  OpenAI        api.openai.com/v1
  Codex         Uses local Codex CLI
  MiniMax       api.minimax.chat/v1
  OpenRouter    Routes to Claude, GPT-4o, Gemini, DeepSeek R1, Llama 4
  Groq          Llama 3.3 70B (ultra-fast), Mixtral 8x7B
  DeepSeek      V3 (reasoning), R1 (chain-of-thought)
  Together      Llama 3.3 70B Turbo, Qwen 2.5 72B
  Mistral       Mistral Large, Codestral (code)
  Fireworks     Llama 3.3 70B (optimized)
  Perplexity    Sonar Pro (search-grounded)
  xAI           Grok 3
  Ollama        Local: Llama 3.2, Code Llama, no API costs
  Gemini        Google generativelanguage.googleapis.com

Local & No-Cost Options

  • Ollama — Run models locally on localhost:11434/v1. Llama 3.2, Code Llama. No API key or usage costs.
  • Claude Max / Codex — Use local CLI installations. No remote API calls.

For high-volume or cost-sensitive workloads, Ollama is ideal. OpenRouter gives access to multiple models (Claude, GPT-4o, Gemini, DeepSeek R1, Llama 4) through a single API.
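For example, a chat request to the local Ollama endpoint follows the standard OpenAI chat-completions shape (the model name and prompt here are illustrative; sending the request assumes an Ollama instance is running with that model pulled):

```python
import json
import urllib.request

# Sketch of an OpenAI-compatible chat request against a local Ollama server.
# No API key is required for the local endpoint.
url = "http://localhost:11434/v1/chat/completions"
payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Summarize this repo in one line."}],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment to send (requires a running Ollama instance):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works against any of the OpenAI-compatible providers above; only the URL and the Authorization header change.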

For guidance on getting started, see the Functions guide.