# Firecrawl

OpenClaw can use Firecrawl in three ways:

- as the `web_search` provider
- as explicit plugin tools: `firecrawl_search` and `firecrawl_scrape`
- as a fallback extractor for `web_fetch`
## Get an API key

- Create a Firecrawl account and generate an API key.
- Store it in config or set `FIRECRAWL_API_KEY` in the gateway environment.
## Configure Firecrawl search

- Choosing Firecrawl in onboarding or `openclaw configure --section web` enables the bundled Firecrawl plugin automatically.
- `web_search` with Firecrawl supports `query` and `count`.
- For Firecrawl-specific controls like `sources`, `categories`, or result scraping, use `firecrawl_search`.
## Configure Firecrawl scrape + web_fetch fallback

- `firecrawl.enabled` defaults to `true` unless explicitly set to `false`.
- Firecrawl fallback attempts run only when an API key is available (`tools.web.fetch.firecrawl.apiKey` or `FIRECRAWL_API_KEY`).
- `maxAgeMs` controls how old cached results can be (in milliseconds). The default is 2 days.
- `firecrawl_scrape` reuses the same `tools.web.fetch.firecrawl.*` settings and environment variables.
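Assuming the dotted keys above map directly onto a JSON config file, a minimal fallback configuration could look like this (the surrounding structure is illustrative, and the key value is a placeholder):

```json
{
  "tools": {
    "web": {
      "fetch": {
        "firecrawl": {
          "enabled": true,
          "apiKey": "fc-YOUR_KEY",
          "maxAgeMs": 172800000
        }
      }
    }
  }
}
```

`172800000` ms equals 2 days, matching the stated default.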
## Firecrawl plugin tools

### firecrawl_search

Use this when you want Firecrawl-specific search controls instead of generic `web_search`.

Core parameters: `query`, `count`, `sources`, `categories`, `scrapeResults`, `timeoutSeconds`.
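As an illustration, a `firecrawl_search` invocation using those controls might pass arguments like these (the specific values, including the `sources` entries, are assumed examples rather than documented defaults):

```json
{
  "query": "site reliability postmortem templates",
  "count": 5,
  "sources": ["web", "news"],
  "scrapeResults": true,
  "timeoutSeconds": 30
}
```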
### firecrawl_scrape

Use this for JS-heavy or bot-protected pages where plain `web_fetch` is weak.

Core parameters: `url`, `extractMode`, `maxChars`, `onlyMainContent`, `maxAgeMs`, `proxy`, `storeInCache`, `timeoutSeconds`.
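For example, a `firecrawl_scrape` call that trims output and accepts hour-old cached results might pass arguments like these (values are illustrative, not defaults):

```json
{
  "url": "https://example.com/js-heavy-page",
  "onlyMainContent": true,
  "maxChars": 20000,
  "maxAgeMs": 3600000,
  "timeoutSeconds": 60
}
```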
## Stealth / bot circumvention

- Firecrawl exposes a `proxy` parameter for bot circumvention: `basic`, `stealth`, or `auto`.
- OpenClaw always uses `proxy: "auto"` plus `storeInCache: true` for Firecrawl requests.
- If `proxy` is omitted, Firecrawl defaults to `auto`. `auto` retries with stealth proxies if a basic attempt fails, which may use more credits than basic-only scraping.
## How web_fetch uses Firecrawl

`web_fetch` tries extractors in order:

1. Readability (local)
2. Firecrawl (if configured)
3. Basic HTML cleanup (last fallback)
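The fallback chain amounts to trying each extractor in turn until one returns usable content. A minimal sketch of that pattern (the function and extractor names are illustrative, not OpenClaw's actual code):

```python
from typing import Callable, Optional

def extract(html: str, extractors: list[Callable[[str], Optional[str]]]) -> Optional[str]:
    """Try each extractor in order; return the first non-empty result."""
    for extractor in extractors:
        try:
            result = extractor(html)
        except Exception:
            continue  # a failed extractor falls through to the next one
        if result:
            return result
    return None

# Illustrative stand-ins for Readability, Firecrawl, and basic cleanup.
readability = lambda html: None            # pretend local extraction found nothing
firecrawl = lambda html: "article text"    # pretend the Firecrawl call succeeded
basic_cleanup = lambda html: html          # crude last resort

print(extract("<html>...</html>", [readability, firecrawl, basic_cleanup]))
# prints "article text"
```

An extractor that raises or returns an empty result simply hands control to the next one, which is why the basic HTML cleanup step can serve as the unconditional last fallback.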