Arcee AI

Arcee AI provides access to the Trinity family of mixture-of-experts models through an OpenAI-compatible API. All Trinity models are Apache 2.0 licensed. Arcee AI models can be accessed directly via the Arcee platform or through OpenRouter.
  • Provider: arcee
  • Auth: ARCEEAI_API_KEY (direct) or OPENROUTER_API_KEY (via OpenRouter)
  • API: OpenAI-compatible
  • Base URL: https://api.arcee.ai/api/v1 (direct) or https://openrouter.ai/api/v1 (OpenRouter)
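Because the API is OpenAI-compatible, any OpenAI-style client can target it by swapping the base URL. A minimal sketch of a chat-completions request body (the request shape follows the OpenAI convention; whether the direct API expects the `arcee/`-prefixed model identifier is an assumption):

```python
import json
import os

# OpenAI-compatible endpoint (direct Arcee platform).
BASE_URL = "https://api.arcee.ai/api/v1"
url = f"{BASE_URL}/chat/completions"

# Request body in the OpenAI chat-completions shape; the model ref
# here matches the OpenClaw catalog below and may need adjusting
# for direct API calls.
payload = {
    "model": "arcee/trinity-large-thinking",
    "messages": [{"role": "user", "content": "Hello, Trinity!"}],
}

headers = {
    "Authorization": f"Bearer {os.environ.get('ARCEEAI_API_KEY', 'sk-...')}",
    "Content-Type": "application/json",
}

print(url)
print(json.dumps(payload, indent=2))
# To actually send the request, POST the payload with any HTTP client
# (urllib.request, or the official openai package pointed at BASE_URL).
```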

Quick start

  1. Get an API key from Arcee AI or OpenRouter.
  2. Set the API key (recommended: store it for the Gateway):
# Direct (Arcee platform)
openclaw onboard --auth-choice arceeai-api-key

# Via OpenRouter
openclaw onboard --auth-choice arceeai-openrouter
  3. Set a default model:
{
  agents: {
    defaults: {
      model: { primary: "arcee/trinity-large-thinking" },
    },
  },
}

Non-interactive example

# Direct (Arcee platform)
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice arceeai-api-key \
  --arceeai-api-key "$ARCEEAI_API_KEY"

# Via OpenRouter
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice arceeai-openrouter \
  --openrouter-api-key "$OPENROUTER_API_KEY"

Environment note

If the Gateway runs as a daemon (launchd/systemd), make sure ARCEEAI_API_KEY (or OPENROUTER_API_KEY) is available to that process (for example, in ~/.openclaw/.env or via env.shellEnv).
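For example, a `~/.openclaw/.env` file could look like this (key values are placeholders):

```shell
# ~/.openclaw/.env — read by the Gateway daemon at startup
ARCEEAI_API_KEY=sk-placeholder

# Or, when routing through OpenRouter:
# OPENROUTER_API_KEY=sk-or-placeholder
```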

Built-in catalog

OpenClaw currently ships this bundled Arcee catalog:
Model ref | Name | Input | Context | Cost (in/out per 1M) | Notes
arcee/trinity-large-thinking | Trinity Large Thinking | text | 256K | 0.25 / 0.90 | Default model; reasoning enabled
arcee/trinity-large-preview | Trinity Large Preview | text | 128K | 0.25 / 1.00 | General-purpose; 400B params, 13B active
arcee/trinity-mini | Trinity Mini 26B | text | 128K | 0.045 / 0.15 | Fast and cost-efficient; function calling
The same model refs work for both direct and OpenRouter setups (for example arcee/trinity-large-thinking). The onboarding preset sets arcee/trinity-large-thinking as the default model.
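Using the per-1M-token rates listed in the catalog, a rough per-request cost estimate can be sketched as follows (rates are hard-coded from the table above and should be verified against current pricing):

```python
# Per-1M-token rates in USD (input, output), taken from the catalog table.
RATES = {
    "arcee/trinity-large-thinking": (0.25, 0.90),
    "arcee/trinity-large-preview": (0.25, 1.00),
    "arcee/trinity-mini": (0.045, 0.15),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# Example: 100K input + 10K output tokens on Trinity Mini.
cost = estimate_cost("arcee/trinity-mini", 100_000, 10_000)
print(f"${cost:.4f}")  # → $0.0060
```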

Supported features

  • Streaming
  • Tool use / function calling
  • Structured output (JSON mode and JSON schema)
  • Extended thinking (Trinity Large Thinking)
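Since the API is OpenAI-compatible, these features are presumably driven through the usual OpenAI request fields. A hedged sketch of a request body combining streaming, JSON mode, and a tool definition (field names follow the OpenAI chat-completions convention, not Arcee-specific documentation; the `get_weather` tool is hypothetical):

```python
import json

# Request body exercising the features listed above, in the OpenAI
# chat-completions shape. Acceptance of these exact fields by Arcee's
# endpoint is an assumption based on its OpenAI compatibility.
request_body = {
    "model": "arcee/trinity-mini",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris, as JSON?"}
    ],
    "stream": True,                                  # streaming
    "response_format": {"type": "json_object"},      # structured output
    "tools": [                                       # function calling
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(request_body, indent=2))
```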