

OpenClaw supports Mistral for both text/image model routing (mistral/...) and audio transcription via Voxtral in media understanding. Mistral can also be used for memory embeddings (memorySearch.provider = "mistral").
  • Provider: mistral
  • Auth: MISTRAL_API_KEY
  • API: Mistral Chat Completions (https://api.mistral.ai/v1)
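To make the auth scheme concrete, here is a minimal sketch that builds (but does not send) a Chat Completions request against the base URL above, using MISTRAL_API_KEY as a Bearer token. The helper name build_chat_request is illustrative, not an OpenClaw API:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.mistral.ai/v1"

def build_chat_request(prompt: str, model: str = "mistral-large-latest") -> urllib.request.Request:
    """Construct a POST request for the Mistral Chat Completions endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Hello")
# Send with: urllib.request.urlopen(req)
```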

Getting started

1. Get your API key

   Create an API key in the Mistral Console.

2. Run onboarding

   openclaw onboard --auth-choice mistral-api-key

   Or pass the key directly:

   openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"

3. Set a default model

   {
     env: { MISTRAL_API_KEY: "sk-..." },
     agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
   }

4. Verify the model is available

   openclaw models list --provider mistral

Built-in LLM catalog

OpenClaw currently ships this bundled Mistral catalog:
| Model ref | Input | Context | Max output | Notes |
| --- | --- | --- | --- | --- |
| mistral/mistral-large-latest | text, image | 262,144 | 16,384 | Default model |
| mistral/mistral-medium-2508 | text, image | 262,144 | 8,192 | Mistral Medium 3.1 |
| mistral/mistral-small-latest | text, image | 128,000 | 16,384 | Mistral Small 4; adjustable reasoning via API reasoning_effort |
| mistral/pixtral-large-latest | text, image | 128,000 | 32,768 | Pixtral |
| mistral/codestral-latest | text | 256,000 | 4,096 | Coding |
| mistral/devstral-medium-latest | text | 262,144 | 32,768 | Devstral 2 |
| mistral/magistral-small | text | 128,000 | 40,000 | Reasoning-enabled |
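Any catalog entry above can be used as the default model. For example, a coding-focused setup could point the primary model at the bundled Codestral entry, following the same config shape shown in Getting started:

```
{
  agents: { defaults: { model: { primary: "mistral/codestral-latest" } } },
}
```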

Audio transcription (Voxtral)

Use Voxtral for batch audio transcription through the media understanding pipeline.
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
The media transcription path uses /v1/audio/transcriptions. The default audio model for Mistral is voxtral-mini-latest.
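OpenClaw's media pipeline handles this call for you; purely to illustrate the endpoint shape, here is a hedged sketch that hand-builds the multipart/form-data request /v1/audio/transcriptions expects (a model field plus a file part). The helper name is hypothetical:

```python
import os
import urllib.request
import uuid

BASE_URL = "https://api.mistral.ai/v1"

def build_transcription_request(audio: bytes, filename: str = "clip.wav",
                                model: str = "voxtral-mini-latest") -> urllib.request.Request:
    """Construct a multipart POST for the audio transcription endpoint."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="model"\r\n\r\n{model}\r\n'
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + audio + f"\r\n--{boundary}--\r\n".encode()
    return urllib.request.Request(
        f"{BASE_URL}/audio/transcriptions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )
```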

Voice Call streaming STT

The bundled mistral plugin registers Voxtral Realtime as a Voice Call streaming STT provider.
| Setting | Config path | Default |
| --- | --- | --- |
| API key | plugins.entries.voice-call.config.streaming.providers.mistral.apiKey | Falls back to MISTRAL_API_KEY |
| Model | ...mistral.model | voxtral-mini-transcribe-realtime-2602 |
| Encoding | ...mistral.encoding | pcm_mulaw |
| Sample rate | ...mistral.sampleRate | 8000 |
| Target delay | ...mistral.targetStreamingDelayMs | 800 |
{
  plugins: {
    entries: {
      "voice-call": {
        config: {
          streaming: {
            enabled: true,
            provider: "mistral",
            providers: {
              mistral: {
                apiKey: "${MISTRAL_API_KEY}",
                targetStreamingDelayMs: 800,
              },
            },
          },
        },
      },
    },
  },
}
OpenClaw defaults Mistral realtime STT to pcm_mulaw at 8 kHz so Voice Call can forward Twilio media frames directly. Use encoding: "pcm_s16le" and a matching sampleRate only if your upstream stream is already raw PCM.
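The arithmetic behind that default: Twilio Media Streams deliver 20 ms frames of 8-bit mu-law audio at 8000 Hz, so each frame is 160 bytes and can be forwarded unchanged; 16-bit PCM at the same rate would double the frame size. A small sketch:

```python
def frame_bytes(sample_rate_hz: int, bytes_per_sample: int, frame_ms: int = 20) -> int:
    """Bytes in one audio frame of the given duration."""
    return sample_rate_hz * bytes_per_sample * frame_ms // 1000

mulaw_frame = frame_bytes(8000, 1)  # pcm_mulaw: 1 byte/sample -> 160 bytes
pcm_frame = frame_bytes(8000, 2)    # pcm_s16le: 2 bytes/sample -> 320 bytes
```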

Advanced configuration

mistral/mistral-small-latest maps to Mistral Small 4 and supports adjustable reasoning on the Chat Completions API via reasoning_effort (none minimizes extra thinking in the output; high surfaces full thinking traces before the final answer). OpenClaw maps the session thinking level to Mistral's reasoning_effort:

| OpenClaw thinking level | Mistral reasoning_effort |
| --- | --- |
| off / minimal | none |
| low / medium / high / xhigh / adaptive / max | high |

Other bundled Mistral catalog models do not use this parameter. Keep using magistral-* models when you want Mistral's native reasoning-first behavior.
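The level-to-effort mapping described above can be sketched as a plain lookup (the table names are from this page; the function is illustrative, not OpenClaw internals):

```python
# Maps OpenClaw session thinking levels to Mistral's reasoning_effort,
# as used for mistral/mistral-small-latest only.
THINKING_TO_EFFORT = {
    "off": "none", "minimal": "none",
    "low": "high", "medium": "high", "high": "high",
    "xhigh": "high", "adaptive": "high", "max": "high",
}

def reasoning_effort(thinking_level: str) -> str:
    return THINKING_TO_EFFORT[thinking_level]
```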
Mistral can serve memory embeddings via /v1/embeddings (default model: mistral-embed).
{
  memorySearch: { provider: "mistral" },
}
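For reference, a request against the embeddings endpoint with the default memory model looks like this; the sketch builds the request without sending it, and the helper name is hypothetical:

```python
import json
import os
import urllib.request

def build_embeddings_request(texts: list[str]) -> urllib.request.Request:
    """Construct a POST request for /v1/embeddings with mistral-embed."""
    return urllib.request.Request(
        "https://api.mistral.ai/v1/embeddings",
        data=json.dumps({"model": "mistral-embed", "input": texts}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```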
  • Mistral auth uses MISTRAL_API_KEY.
  • Provider base URL defaults to https://api.mistral.ai/v1.
  • Onboarding default model is mistral/mistral-large-latest.
  • Requests authenticate with Bearer auth using your API key.

Model selection

Choosing providers, model refs, and failover behavior.

Media understanding

Audio transcription setup and provider selection.