Video Generation

The video_generate tool lets the agent create videos using your configured providers. In agent sessions, OpenClaw starts video generation as a background task, tracks it in the task ledger, and wakes the agent when the clip is ready so it can post the finished video back into the original channel.
The tool only appears when at least one video-generation provider is available. If you don’t see video_generate in your agent’s tools, configure agents.defaults.videoGenerationModel or set up a provider API key.
In agent sessions, video_generate returns immediately with a task id/run id. The actual provider job continues in the background. When it finishes, OpenClaw wakes the same session with an internal completion event so the agent can send a normal follow-up plus the generated video attachment.

Quick start

  1. Set an API key for at least one provider (for example OPENAI_API_KEY, GEMINI_API_KEY, MODELSTUDIO_API_KEY, QWEN_API_KEY, or RUNWAYML_API_SECRET).
  2. Optionally set your preferred model:
{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "qwen/wan2.6-t2v",
      },
    },
  },
}
  3. Ask the agent: “Generate a 5-second cinematic video of a friendly lobster surfing at sunset.”
The agent calls video_generate automatically. No tool allow-listing is needed; the tool is enabled by default whenever a provider is available. In direct synchronous contexts without a session-backed agent run, the tool still falls back to inline generation and returns the final media path in the tool result.
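You can also invoke the tool directly. As an illustrative sketch (the prompt text and duration are examples, and quoting of multi-word values is assumed to follow your chat surface's conventions):

```
/tool video_generate action=generate prompt="A friendly lobster surfing at sunset, cinematic" durationSeconds=5
```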

Supported providers

| Provider | Default model | Reference inputs | API key |
| --- | --- | --- | --- |
| Alibaba | wan2.6-t2v | Yes, remote URLs | MODELSTUDIO_API_KEY, DASHSCOPE_API_KEY, QWEN_API_KEY |
| BytePlus | seedance-1-0-lite-t2v-250428 | 1 image | BYTEPLUS_API_KEY |
| fal | fal-ai/minimax/video-01-live | 1 image | FAL_KEY |
| Google | veo-3.1-fast-generate-preview | 1 image or 1 video | GEMINI_API_KEY, GOOGLE_API_KEY |
| MiniMax | MiniMax-Hailuo-2.3 | 1 image | MINIMAX_API_KEY |
| OpenAI | sora-2 | 1 image or 1 video | OPENAI_API_KEY |
| Qwen | wan2.6-t2v | Yes, remote URLs | QWEN_API_KEY, MODELSTUDIO_API_KEY, DASHSCOPE_API_KEY |
| Runway | gen4.5 | 1 image or 1 video | RUNWAYML_API_SECRET, RUNWAY_API_KEY |
| Together | Wan-AI/Wan2.2-T2V-A14B | 1 image | TOGETHER_API_KEY |
| xAI | grok-imagine-video | 1 image or 1 video | XAI_API_KEY |
Use action: "list" to inspect available providers and models at runtime:
/tool video_generate action=list

Tool parameters

| Parameter | Type | Description |
| --- | --- | --- |
| prompt | string | Video generation prompt (required for action: "generate") |
| action | string | "generate" (default) or "list" to inspect providers |
| model | string | Provider/model override, e.g. qwen/wan2.6-t2v |
| image | string | Single reference image path or URL |
| images | string[] | Multiple reference images (up to 5) |
| video | string | Single reference video path or URL |
| videos | string[] | Multiple reference videos (up to 4) |
| size | string | Size hint when the provider supports it |
| aspectRatio | string | Aspect ratio: 1:1, 2:3, 3:2, 3:4, 4:3, 4:5, 5:4, 9:16, 16:9, 21:9 |
| resolution | string | Resolution hint: 480P, 720P, or 1080P |
| durationSeconds | number | Target duration in seconds; OpenClaw may round to the nearest provider-supported value |
| audio | boolean | Enable generated audio when the provider supports it |
| watermark | boolean | Toggle provider watermarking when supported |
| filename | string | Output filename hint |
Not all providers support all parameters. Unsupported optional overrides are ignored on a best-effort basis and reported back in the tool result as a warning. Hard capability limits such as too many reference inputs still fail before submission. When a provider or model only supports a discrete set of video lengths, OpenClaw rounds durationSeconds to the nearest supported value and reports the normalized duration in the tool result.
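For illustration, a call combining several optional parameters from the table (the model, aspect ratio, resolution, and duration values are examples, not requirements):

```
/tool video_generate prompt="Timelapse of clouds over a mountain ridge" model=qwen/wan2.6-t2v aspectRatio=16:9 resolution=720P durationSeconds=5
```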

Async behavior

  • Session-backed agent runs: video_generate creates a background task, returns a started/task response immediately, and posts the finished video later in a follow-up agent message.
  • Task tracking: use openclaw tasks list / openclaw tasks show <taskId> to inspect queued, running, and terminal status for the generation.
  • Completion wake: OpenClaw injects an internal completion event back into the same session so the model can write the user-facing follow-up itself.
  • No-session fallback: direct/local contexts without a real agent session still run inline and return the final video result in the same turn.
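A typical inspection flow with the task commands above, where `<taskId>` is a placeholder for the id returned in the started/task response:

```
openclaw tasks list
openclaw tasks show <taskId>
```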

Configuration

Model selection

{
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "qwen/wan2.6-t2v",
        fallbacks: ["qwen/wan2.6-r2v-flash"],
      },
    },
  },
}

Provider selection order

When generating a video, OpenClaw tries providers in this order:
  1. model parameter from the tool call (if the agent specifies one)
  2. videoGenerationModel.primary from config
  3. videoGenerationModel.fallbacks in order
  4. Auto-detection — uses auth-backed provider defaults only:
    • current default provider first
    • remaining registered video-generation providers in provider-id order
If a provider fails, the next candidate is tried automatically. If all fail, the error includes details from each attempt.
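The ordering above can be sketched as a candidate-list builder. This is an illustrative model, not OpenClaw's actual implementation; the function and type names are hypothetical, and bare provider ids stand in for each provider's auth-backed default model:

```typescript
type ModelRef = string; // "provider/model", e.g. "qwen/wan2.6-t2v"

interface VideoModelConfig {
  primary?: ModelRef;
  fallbacks?: ModelRef[];
}

function candidateModels(
  toolModel: ModelRef | undefined,
  config: VideoModelConfig,
  defaultProvider: string | undefined,
  registeredProviders: string[], // providers with auth-backed defaults
): ModelRef[] {
  const out: ModelRef[] = [];
  const push = (m?: ModelRef) => {
    if (m && !out.includes(m)) out.push(m); // keep first occurrence only
  };
  push(toolModel); // 1. explicit model from the tool call
  push(config.primary); // 2. videoGenerationModel.primary
  (config.fallbacks ?? []).forEach((m) => push(m)); // 3. fallbacks, in order
  // 4. auto-detection: current default provider first,
  //    then remaining providers in provider-id order
  const rest = [...registeredProviders].sort();
  const ordered = defaultProvider
    ? [defaultProvider, ...rest.filter((p) => p !== defaultProvider)]
    : rest;
  ordered.forEach((p) => push(p));
  return out;
}
```

Each candidate is tried in turn; duplicates are dropped so a configured primary that matches the tool-call override is not attempted twice.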

Provider notes

  • Alibaba uses the DashScope / Model Studio async video endpoint and currently requires remote http(s) URLs for reference assets.
  • Google uses Gemini/Veo and supports a single image or video reference input.
  • MiniMax, Together, BytePlus, and fal currently support a single image reference input.
  • OpenAI uses the native video endpoint and currently defaults to sora-2.
  • Qwen supports image/video references, but the upstream DashScope video endpoint currently requires remote http(s) URLs for those references.
  • Runway uses the native async task API with GET /v1/tasks/{id} polling and currently defaults to gen4.5.
  • xAI uses the native xAI video API and supports text-to-video, image-to-video, and remote video edit/extend flows.
  • fal uses the queue-backed fal video flow for long-running jobs instead of a single blocking inference request.

Qwen reference inputs

The bundled Qwen provider supports text-to-video plus image/video reference modes, but the upstream DashScope video endpoint currently requires remote http(s) URLs for reference inputs. Local file paths and uploaded buffers are rejected up front instead of being silently ignored.
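For example, an image-to-video call with a remote reference (the prompt and URL are placeholders; any publicly reachable http(s) URL works):

```
/tool video_generate prompt="Animate this scene with gentle waves" model=qwen/wan2.6-t2v image=https://example.com/reference.png
```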