# Groq

Groq provides ultra-fast inference on open-source models (Llama, Gemma, Mistral, and more) using custom LPU hardware. OpenClaw connects to Groq through its OpenAI-compatible API.

| Property | Value |
|---|---|
| Provider | `groq` |
| Auth | `GROQ_API_KEY` |
| API | OpenAI-compatible |
## Getting started
### Config file example
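The inline example did not survive extraction here; the following is a hypothetical sketch only. The field names (`provider`, `model`) and the model slug are assumptions — see the configuration reference for the real schema:

```json
{
  "provider": "groq",
  "model": "llama-3.3-70b-versatile"
}
```

With `GROQ_API_KEY` set in the environment, no key needs to appear in the config file itself.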
## Available models

Groq’s model catalog changes frequently. Run `openclaw models list | grep groq` to see currently available models, or check console.groq.com/docs/models.
| Model | Notes |
|---|---|
| Llama 3.3 70B Versatile | General-purpose, large context |
| Llama 3.1 8B Instant | Fast, lightweight |
| Gemma 2 9B | Compact, efficient |
| Mixtral 8x7B | MoE architecture, strong reasoning |
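Models are referenced in config by slug rather than display name. As a hypothetical illustration (the `groq/` prefix and exact slug format are assumptions — confirm against `openclaw models list | grep groq` output):

```json
{ "model": "groq/llama-3.1-8b-instant" }
```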
## Audio transcription

Groq also provides fast Whisper-based audio transcription. When configured as a media-understanding provider, OpenClaw uses Groq’s `whisper-large-v3-turbo` model to transcribe voice messages through the shared `tools.media.audio` surface.
### Audio transcription details
| Property | Value |
|---|---|
| Shared config path | `tools.media.audio` |
| Default base URL | `https://api.groq.com/openai/v1` |
| Default model | `whisper-large-v3-turbo` |
| API endpoint | OpenAI-compatible `/audio/transcriptions` |
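Because the endpoint is OpenAI-compatible, it can also be called directly over HTTP. A minimal standard-library sketch of building such a request — the multipart assembly and filename are illustrative; in practice OpenClaw drives this through `tools.media.audio`:

```python
import urllib.request
import uuid

GROQ_BASE = "https://api.groq.com/openai/v1"

def transcription_request(audio: bytes, filename: str, api_key: str,
                          model: str = "whisper-large-v3-turbo") -> urllib.request.Request:
    """Build a multipart/form-data POST for the /audio/transcriptions endpoint."""
    boundary = uuid.uuid4().hex
    body = b"".join([
        # "model" form field selects the Whisper variant.
        f'--{boundary}\r\nContent-Disposition: form-data; name="model"\r\n\r\n{model}\r\n'.encode(),
        # "file" form field carries the raw audio bytes.
        (f'--{boundary}\r\nContent-Disposition: form-data; name="file"; filename="{filename}"\r\n'
         f"Content-Type: application/octet-stream\r\n\r\n").encode() + audio + b"\r\n",
        f"--{boundary}--\r\n".encode(),
    ])
    return urllib.request.Request(
        f"{GROQ_BASE}/audio/transcriptions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
        method="POST",
    )
```

Sending the built request with `urllib.request.urlopen` returns JSON containing the transcribed text, per the OpenAI-compatible response shape.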
## Environment note
If the Gateway runs as a daemon (launchd/systemd), make sure `GROQ_API_KEY` is available to that process (for example, in `~/.openclaw/.env` or via `env.shellEnv`).

## Related
- **Model selection**: choosing providers, model refs, and failover behavior.
- **Configuration reference**: full config schema, including provider and audio settings.
- **Groq Console**: Groq dashboard, API docs, and pricing.
- **Groq model list**: official Groq model catalog.