memory-lancedb is a bundled memory plugin that stores long-term memory in
LanceDB and uses embeddings for recall. It can automatically recall relevant
memories before a model turn and capture important facts after a response.
Use it when you want a local vector database for memory, want to use an
OpenAI-compatible embedding endpoint, or want to keep memory data outside
the default built-in memory store.
memory-lancedb is an active memory plugin. Enable it by selecting the memory
slot with plugins.slots.memory = "memory-lancedb". Companion plugins such as
memory-wiki can run beside it, but only one plugin owns the active memory slot.
Quick start
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            provider: "openai",
            model: "text-embedding-3-small",
          },
          autoRecall: true,
          autoCapture: false,
        },
      },
    },
  },
}
Restart the Gateway after changing plugin config, then verify the plugin is loaded.
Provider-backed embeddings
memory-lancedb can use the same memory embedding provider adapters as
memory-core. Set embedding.provider and omit embedding.apiKey to use the
provider’s configured auth profile, environment variable, or
models.providers.<provider>.apiKey.
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            provider: "openai",
            model: "text-embedding-3-small",
          },
          autoRecall: true,
        },
      },
    },
  },
}
This path works with provider auth profiles that expose embedding credentials.
For example, GitHub Copilot can be used when the Copilot profile/plan supports
embeddings:
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            provider: "github-copilot",
            model: "text-embedding-3-small",
          },
        },
      },
    },
  },
}
OpenAI Codex / ChatGPT OAuth (openai-codex) is not an OpenAI Platform
embeddings credential. For OpenAI embeddings, use an OpenAI API key auth profile,
OPENAI_API_KEY, or models.providers.openai.apiKey. OAuth-only users can use
another embedding-capable provider such as GitHub Copilot or Ollama.
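For the OpenAI API key path, a minimal sketch (assuming OPENAI_API_KEY is set on the Gateway host) looks like this:
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            // uses the OpenAI API key from the environment
            apiKey: "${OPENAI_API_KEY}",
            model: "text-embedding-3-small",
          },
        },
      },
    },
  },
}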
Ollama embeddings
For Ollama embeddings, prefer the bundled Ollama embedding provider. It uses the
native Ollama /api/embed endpoint and follows the same auth/base URL rules as
the Ollama provider documented in Ollama.
{
  plugins: {
    slots: {
      memory: "memory-lancedb",
    },
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            provider: "ollama",
            baseUrl: "http://127.0.0.1:11434",
            model: "mxbai-embed-large",
            dimensions: 1024,
          },
          recallMaxChars: 400,
          autoRecall: true,
          autoCapture: false,
        },
      },
    },
  },
}
Set dimensions for non-standard embedding models. OpenClaw knows the
dimensions for text-embedding-3-small and text-embedding-3-large; custom
models need the value in config so LanceDB can create the vector column.
For small local embedding models, lower recallMaxChars if you see context
length errors from the local server.
OpenAI-compatible providers
Some OpenAI-compatible embedding providers reject the encoding_format
parameter, while others ignore it and always return number[] vectors.
memory-lancedb therefore omits encoding_format on embedding requests and
accepts either float-array responses or base64-encoded float32 responses.
If you have a raw OpenAI-compatible embeddings endpoint that does not have a
bundled provider adapter, omit embedding.provider (or leave it as openai) and
set embedding.apiKey plus embedding.baseUrl. This preserves the direct
OpenAI-compatible client path.
Set embedding.dimensions for providers whose model dimensions are not built
in. For example, ZhiPu embedding-3 uses 2048 dimensions:
{
  plugins: {
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          embedding: {
            apiKey: "${ZHIPU_API_KEY}",
            baseUrl: "https://open.bigmodel.cn/api/paas/v4",
            model: "embedding-3",
            dimensions: 2048,
          },
        },
      },
    },
  },
}
Recall and capture limits
memory-lancedb has two separate text limits:
| Setting | Default | Range | Applies to |
|---|---|---|---|
| recallMaxChars | 1000 | 100-10000 | text sent to the embedding API for recall |
| captureMaxChars | 500 | 100-10000 | assistant message length eligible for capture |
recallMaxChars controls auto-recall, the memory_recall tool, the
memory_forget query path, and openclaw ltm search. Auto-recall prefers the
latest user message from the turn and falls back to the full prompt only when no
user message is available. This keeps channel metadata and large prompt blocks
out of the embedding request.
captureMaxChars controls whether a response is short enough to be considered
for automatic capture. It does not cap recall query embeddings.
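As a sketch, both limits can be tuned together; the values below are illustrative and stay within the documented 100-10000 range:
{
  plugins: {
    entries: {
      "memory-lancedb": {
        config: {
          recallMaxChars: 800, // illustrative value
          captureMaxChars: 300, // illustrative value
        },
      },
    },
  },
}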
Commands
When memory-lancedb is the active memory plugin, it registers the ltm CLI
namespace:
openclaw ltm list
openclaw ltm search "project preferences"
openclaw ltm stats
Agents also get LanceDB memory tools from the active memory plugin:
memory_recall for LanceDB-backed recall
memory_store for saving important facts, preferences, decisions, and entities
memory_forget for removing matching memories
Storage
By default, LanceDB data lives under ~/.openclaw/memory/lancedb. Override the
path with dbPath:
{
  plugins: {
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          dbPath: "~/.openclaw/memory/lancedb",
          embedding: {
            apiKey: "${OPENAI_API_KEY}",
            model: "text-embedding-3-small",
          },
        },
      },
    },
  },
}
storageOptions accepts string key/value pairs for LanceDB storage backends and
supports ${ENV_VAR} expansion:
{
  plugins: {
    entries: {
      "memory-lancedb": {
        enabled: true,
        config: {
          dbPath: "s3://memory-bucket/openclaw",
          storageOptions: {
            access_key: "${AWS_ACCESS_KEY_ID}",
            secret_key: "${AWS_SECRET_ACCESS_KEY}",
            endpoint: "${AWS_ENDPOINT_URL}",
          },
          embedding: {
            apiKey: "${OPENAI_API_KEY}",
            model: "text-embedding-3-small",
          },
        },
      },
    },
  },
}
Runtime dependencies
memory-lancedb depends on the native @lancedb/lancedb package. Packaged
OpenClaw installs first try the bundled runtime dependency and, when the
bundled import is not available, can repair the plugin runtime dependency
under the OpenClaw state directory.
If an older install logs a missing dist/package.json or missing
@lancedb/lancedb error during plugin load, upgrade OpenClaw and restart the
Gateway.
If the plugin logs that LanceDB is unavailable on darwin-x64, use the default
memory backend on that machine, move the Gateway to a supported platform, or
disable memory-lancedb.
Troubleshooting
Input length exceeds the context length
This usually means the embedding model rejected the recall query:
memory-lancedb: recall failed: Error: 400 the input length exceeds the context length
Set a lower recallMaxChars, then restart the Gateway:
{
  plugins: {
    entries: {
      "memory-lancedb": {
        config: {
          recallMaxChars: 400,
        },
      },
    },
  },
}
For Ollama, also verify the embedding server is reachable from the Gateway host:
curl http://127.0.0.1:11434/v1/embeddings \
-H "Content-Type: application/json" \
-d '{"model":"mxbai-embed-large","input":"hello"}'
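Because the bundled Ollama provider uses the native /api/embed endpoint, you can also check that path directly (a sketch assuming the default port and the model from the example above):
curl http://127.0.0.1:11434/api/embed \
  -H "Content-Type: application/json" \
  -d '{"model":"mxbai-embed-large","input":"hello"}'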
Unsupported embedding model
Without embedding.dimensions set, only the built-in OpenAI embedding model
dimensions are known. For local or custom embedding models, set
embedding.dimensions to the vector size reported by that model.
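For example, a sketch for a local Ollama model, assuming nomic-embed-text, which reports 768-dimensional vectors (check your model's actual output size):
{
  plugins: {
    entries: {
      "memory-lancedb": {
        config: {
          embedding: {
            provider: "ollama",
            model: "nomic-embed-text",
            dimensions: 768, // assumption: this model returns 768-dim vectors
          },
        },
      },
    },
  },
}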
Plugin loads but no memories appear
Check that plugins.slots.memory points at memory-lancedb, then run:
openclaw ltm stats
openclaw ltm search "recent preference"
If autoCapture is disabled, the plugin will recall existing memories but will
not automatically store new ones. Use the memory_store tool or enable
autoCapture if you want automatic capture.
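To turn on automatic capture, a minimal config change looks like this:
{
  plugins: {
    entries: {
      "memory-lancedb": {
        config: {
          autoCapture: true,
        },
      },
    },
  },
}
Restart the Gateway after the change.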