
NVIDIA

NVIDIA provides a free, OpenAI-compatible API for open models at https://integrate.api.nvidia.com/v1. Authenticate with an API key from build.nvidia.com.

Getting started

1. Get your API key

Create an API key at build.nvidia.com.
2. Export the key and run onboarding

export NVIDIA_API_KEY="nvapi-..."
openclaw onboard --auth-choice skip
3. Set an NVIDIA model

openclaw models set nvidia/nvidia/nemotron-3-super-120b-a12b
If you pass --token instead of using the env var, the key lands in your shell history and in ps output. Prefer the NVIDIA_API_KEY environment variable whenever possible.
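One way to keep the key out of interactive command lines is to load it from a permission-restricted file. A minimal sketch — the path below is illustrative, not an openclaw convention, and in practice you would place the key in the file via an editor or secrets manager rather than echoing it on the command line:

```shell
# Illustrative location for the key file; any owner-only path works.
mkdir -p "$HOME/.config/nvidia"
printf 'nvapi-...' > "$HOME/.config/nvidia/api_key"  # placeholder; avoid typing real keys inline
chmod 600 "$HOME/.config/nvidia/api_key"

# Load the key into the environment for this shell session.
export NVIDIA_API_KEY="$(cat "$HOME/.config/nvidia/api_key")"
```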

Config example

{
  env: { NVIDIA_API_KEY: "nvapi-..." },
  models: {
    providers: {
      nvidia: {
        baseUrl: "https://integrate.api.nvidia.com/v1",
        api: "openai-completions",
      },
    },
  },
  agents: {
    defaults: {
      model: { primary: "nvidia/nvidia/nemotron-3-super-120b-a12b" },
    },
  },
}

Built-in catalog

| Model ref | Name | Context | Max output |
| --- | --- | --- | --- |
| nvidia/nvidia/nemotron-3-super-120b-a12b | NVIDIA Nemotron 3 Super 120B | 262,144 | 8,192 |
| nvidia/moonshotai/kimi-k2.5 | Kimi K2.5 | 262,144 | 8,192 |
| nvidia/minimaxai/minimax-m2.5 | Minimax M2.5 | 196,608 | 8,192 |
| nvidia/z-ai/glm5 | GLM 5 | 202,752 | 8,192 |

Advanced notes

- The provider auto-enables when the NVIDIA_API_KEY environment variable is set. No explicit provider config is required beyond the key.
- The bundled catalog is static. Costs default to 0 in source since NVIDIA currently offers free API access for the listed models.
- NVIDIA uses the standard /v1 completions endpoint, so any OpenAI-compatible tooling should work out of the box with the NVIDIA base URL.
- NVIDIA models are currently free to use. Check build.nvidia.com for the latest availability and rate-limit details.
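To illustrate what "OpenAI-compatible" means in practice, here is a minimal Python sketch that assembles a chat-completion request against the NVIDIA base URL using only the standard library. It assumes the standard OpenAI `/chat/completions` route and a bare model id (the leading `nvidia/` in openclaw model refs is the provider prefix, stripped before the API call); the request is built but not sent, since sending requires a real key:

```python
import json
import urllib.request

BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion POST without sending it."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder key; urllib.request.urlopen(req) would actually send the request.
req = build_chat_request("nvapi-...", "nvidia/nemotron-3-super-120b-a12b", "Hello")
```

Any OpenAI SDK or compatible client can be pointed at the same base URL by overriding its base-URL setting.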

Model selection

Choosing providers, model refs, and failover behavior.

Configuration reference

Full config reference for agents, models, and providers.