Skill Workshop is experimental. It is disabled by default, its capture heuristics and reviewer prompts may change between releases, and automatic writes should be used only in trusted workspaces after first reviewing pending-mode output.

Skill Workshop is procedural memory for workspace skills. It lets an agent turn reusable workflows, user corrections, hard-won fixes, and recurring pitfalls into SKILL.md files under:
<workspace>/skills/<skill-name>/SKILL.md
This is different from long-term memory:
  • Memory stores facts, preferences, entities, and past context.
  • Skills store reusable procedures the agent should follow on future tasks.
  • Skill Workshop is the bridge from a useful turn to a durable workspace skill, with safety checks and optional approval.
Skill Workshop is useful when the agent learns a procedure such as:
  • how to validate externally sourced animated GIF assets
  • how to replace screenshot assets and verify dimensions
  • how to run a repo-specific QA scenario
  • how to debug a recurring provider failure
  • how to repair a stale local workflow note
It is not intended for:
  • facts like “the user likes blue”
  • broad autobiographical memory
  • raw transcript archiving
  • secrets, credentials, or hidden prompt text
  • one-off instructions that will not repeat

Default state

The bundled plugin is experimental and remains disabled unless it is explicitly enabled in plugins.entries.skill-workshop. The plugin manifest does not set enabledByDefault: true; the enabled: true default inside the plugin config schema applies only after the plugin entry has already been selected and loaded. Experimental means:
  • the plugin is supported enough for opt-in testing and dogfooding
  • proposal storage, reviewer thresholds, and capture heuristics can evolve
  • pending approval is the recommended starting mode
  • auto apply is for trusted personal/workspace setups, not shared or hostile input-heavy environments

Enable

Minimal safe config:
{
  plugins: {
    entries: {
      "skill-workshop": {
        enabled: true,
        config: {
          autoCapture: true,
          approvalPolicy: "pending",
          reviewMode: "hybrid",
        },
      },
    },
  },
}
With this config:
  • the skill_workshop tool is available
  • explicit reusable corrections are queued as pending proposals
  • threshold-based reviewer passes can propose skill updates
  • no skill file is written until a pending proposal is applied
Use automatic writes only in trusted workspaces:
{
  plugins: {
    entries: {
      "skill-workshop": {
        enabled: true,
        config: {
          autoCapture: true,
          approvalPolicy: "auto",
          reviewMode: "hybrid",
        },
      },
    },
  },
}
approvalPolicy: "auto" still uses the same scanner and quarantine path. It does not apply proposals with critical findings.
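The routing this implies can be sketched as follows. This is an illustration of the documented behavior, not the plugin source; the function and type names are hypothetical.

```typescript
// Sketch of the routing implied above: the scanner always runs, and
// critical findings take the quarantine path under either policy.
type Policy = "pending" | "auto";
type Status = "pending" | "applied" | "quarantined";

function routeProposal(policy: Policy, hasCriticalFindings: boolean): Status {
  if (hasCriticalFindings) return "quarantined"; // same quarantine path in both modes
  return policy === "auto" ? "applied" : "pending";
}
```

In other words, auto mode only skips the approval step for proposals the scanner considers safe.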

Configuration

| Key | Default | Range / values | Meaning |
| --- | --- | --- | --- |
| enabled | true | boolean | Enables the plugin after the plugin entry is loaded. |
| autoCapture | true | boolean | Enables post-turn capture/review on successful agent turns. |
| approvalPolicy | "pending" | "pending", "auto" | Queue proposals or write safe proposals automatically. |
| reviewMode | "hybrid" | "off", "heuristic", "llm", "hybrid" | Chooses explicit correction capture, LLM reviewer, both, or neither. |
| reviewInterval | 15 | 1..200 | Run reviewer after this many successful turns. |
| reviewMinToolCalls | 8 | 1..500 | Run reviewer after this many observed tool calls. |
| reviewTimeoutMs | 45000 | 5000..180000 | Timeout for the embedded reviewer run. |
| maxPending | 50 | 1..200 | Max pending/quarantined proposals kept per workspace. |
| maxSkillBytes | 40000 | 1024..200000 | Max generated skill/support file size. |
Recommended profiles:
// Conservative: explicit tool use only, no automatic capture.
{
  autoCapture: false,
  approvalPolicy: "pending",
  reviewMode: "off",
}
// Review-first: capture automatically, but require approval.
{
  autoCapture: true,
  approvalPolicy: "pending",
  reviewMode: "hybrid",
}
// Trusted automation: write safe proposals immediately.
{
  autoCapture: true,
  approvalPolicy: "auto",
  reviewMode: "hybrid",
}
// Low-cost: no reviewer LLM call, only explicit correction phrases.
{
  autoCapture: true,
  approvalPolicy: "pending",
  reviewMode: "heuristic",
}

Capture paths

Skill Workshop has three capture paths.

Tool suggestions

The model can call skill_workshop directly when it sees a reusable procedure or when the user asks it to save/update a skill. This is the most explicit path and works even with autoCapture: false.

Heuristic capture

When autoCapture is enabled and reviewMode is heuristic or hybrid, the plugin scans successful turns for explicit user correction phrases:
  • next time
  • from now on
  • remember to
  • make sure to
  • always ... use/check/verify/record/save/prefer
  • prefer ... when/for/instead/use
  • when asked
The heuristic creates a proposal from the latest matching user instruction. It uses topic hints to choose skill names for common workflows:
  • animated GIF tasks -> animated-gif-workflow
  • screenshot or asset tasks -> screenshot-asset-workflow
  • QA or scenario tasks -> qa-scenario-workflow
  • GitHub PR tasks -> github-pr-workflow
  • fallback -> learned-workflows
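The topic-hint mapping above can be sketched as a small matcher. The resulting skill names come from the list, but the keyword patterns here are assumptions for illustration.

```typescript
// Illustrative topic-hint matcher; skill names are the documented ones,
// the matching keywords are assumptions.
function skillNameForTopic(text: string): string {
  const t = text.toLowerCase();
  if (/\bgif\b/.test(t)) return "animated-gif-workflow";
  if (/screenshot|\basset\b/.test(t)) return "screenshot-asset-workflow";
  if (/\bqa\b|scenario/.test(t)) return "qa-scenario-workflow";
  if (/github|pull request|\bpr\b/.test(t)) return "github-pr-workflow";
  return "learned-workflows"; // fallback when no hint matches
}
```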
Heuristic capture is intentionally narrow. It is for clear corrections and repeatable process notes, not for general transcript summarization.

LLM reviewer

When autoCapture is enabled and reviewMode is llm or hybrid, the plugin runs a compact embedded reviewer after thresholds are reached. The reviewer receives:
  • the recent transcript text, capped to the last 12,000 characters
  • up to 12 existing workspace skills
  • up to 2,000 characters from each existing skill
  • JSON-only instructions
The reviewer has no tools:
  • disableTools: true
  • toolsAllow: []
  • disableMessageTool: true
The reviewer returns either { "action": "none" } or one proposal. The action field is create, append, or replace — prefer append/replace when a relevant skill already exists; use create only when no existing skill fits. Example create:
{
  "action": "create",
  "skillName": "media-asset-qa",
  "title": "Media Asset QA",
  "reason": "Reusable animated media acceptance workflow",
  "description": "Validate externally sourced animated media before product use.",
  "body": "## Workflow\n\n- Verify true animation.\n- Record attribution.\n- Store a local approved copy.\n- Verify in product UI before final reply."
}
append adds section + body. replace swaps oldText for newText in the named skill.
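The three change shapes can be sketched as a discriminated union. Field names follow the documented examples; the exact types inside the plugin are assumptions.

```typescript
// Sketch of the three reviewer/suggest change shapes described above.
type SkillChange =
  | { action: "create"; skillName: string; title?: string; reason?: string;
      description: string; body: string }
  | { action: "append"; skillName: string; section?: string; body: string }
  | { action: "replace"; skillName: string; oldText: string; newText: string };

// Narrowing on the action field covers all three variants.
function describeChange(c: SkillChange): string {
  switch (c.action) {
    case "create": return `create ${c.skillName}`;
    case "append": return `append to ${c.skillName}`;
    case "replace": return `replace text in ${c.skillName}`;
  }
}
```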

Proposal lifecycle

Every generated update becomes a proposal with:
  • id
  • createdAt
  • updatedAt
  • workspaceDir
  • optional agentId
  • optional sessionId
  • skillName
  • title
  • reason
  • source: tool, agent_end, or reviewer
  • status
  • change
  • optional scanFindings
  • optional quarantineReason
Proposal statuses:
  • pending - waiting for approval
  • applied - written to <workspace>/skills
  • rejected - rejected by operator/model
  • quarantined - blocked by critical scanner findings
State is stored per workspace under the Gateway state directory:
<stateDir>/skill-workshop/<workspace-hash>.json
Pending and quarantined proposals are deduplicated by skill name and change payload. The store keeps the newest pending/quarantined proposals up to maxPending.
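The dedup-and-prune behavior can be sketched as below. Keying proposals by skill name plus the serialized change payload is an assumption consistent with the description; the real store logic may differ.

```typescript
// Sketch of dedup (by skill name + change payload, keeping the newest)
// followed by pruning down to maxPending.
interface Proposal { id: string; skillName: string; change: unknown; createdAt: number }

function dedupeAndPrune(proposals: Proposal[], maxPending: number): Proposal[] {
  const byKey = new Map<string, Proposal>();
  for (const p of proposals) {
    const key = `${p.skillName}:${JSON.stringify(p.change)}`;
    const existing = byKey.get(key);
    if (!existing || p.createdAt > existing.createdAt) byKey.set(key, p); // keep newest duplicate
  }
  return Array.from(byKey.values())
    .sort((a, b) => b.createdAt - a.createdAt) // newest first
    .slice(0, maxPending); // enforce the cap
}
```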

Tool reference

The plugin registers one agent tool:
skill_workshop

status

Count proposals by state for the active workspace.
{ "action": "status" }
Result shape:
{
  "workspaceDir": "/path/to/workspace",
  "pending": 1,
  "quarantined": 0,
  "applied": 3,
  "rejected": 0
}

list_pending

List pending proposals.
{ "action": "list_pending" }
To list another status:
{ "action": "list_pending", "status": "applied" }
Valid status values:
  • pending
  • applied
  • rejected
  • quarantined

list_quarantine

List quarantined proposals.
{ "action": "list_quarantine" }
Use this when automatic capture appears to do nothing and the logs mention skill-workshop: quarantined <skill>.

inspect

Fetch a proposal by id.
{
  "action": "inspect",
  "id": "proposal-id"
}

suggest

Create a proposal. With approvalPolicy: "pending" (default), this queues instead of writing.
{
  "action": "suggest",
  "skillName": "animated-gif-workflow",
  "title": "Animated GIF Workflow",
  "reason": "User established reusable GIF validation rules.",
  "description": "Validate animated GIF assets before using them.",
  "body": "## Workflow\n\n- Verify the URL resolves to image/gif.\n- Confirm it has multiple frames.\n- Record attribution and license.\n- Avoid hotlinking when a local asset is needed."
}
Pass apply: true to request an immediate write rather than queueing (still subject to the scanner):
{
  "action": "suggest",
  "apply": true,
  "skillName": "animated-gif-workflow",
  "description": "Validate animated GIF assets before using them.",
  "body": "## Workflow\n\n- Verify true animation.\n- Record attribution."
}
Pass apply: false to force queueing even under approvalPolicy: "auto":
{
  "action": "suggest",
  "apply": false,
  "skillName": "screenshot-asset-workflow",
  "description": "Screenshot replacement workflow.",
  "body": "## Workflow\n\n- Verify dimensions.\n- Optimize the PNG.\n- Run the relevant gate."
}
Provide section to append to a specific section of the skill:
{
  "action": "suggest",
  "skillName": "qa-scenario-workflow",
  "section": "Workflow",
  "description": "QA scenario workflow.",
  "body": "- For media QA, verify generated assets render and pass final assertions."
}
Provide oldText and newText to replace exact text in an existing skill:
{
  "action": "suggest",
  "skillName": "github-pr-workflow",
  "oldText": "- Check the PR.",
  "newText": "- Check unresolved review threads, CI status, linked issues, and changed files before deciding."
}

apply

Apply a pending proposal.
{
  "action": "apply",
  "id": "proposal-id"
}
apply refuses quarantined proposals:
quarantined proposal cannot be applied

reject

Mark a proposal rejected.
{
  "action": "reject",
  "id": "proposal-id"
}

write_support_file

Write a supporting file inside an existing or proposed skill directory. Allowed top-level support directories:
  • references/
  • templates/
  • scripts/
  • assets/
Example:
{
  "action": "write_support_file",
  "skillName": "release-workflow",
  "relativePath": "references/checklist.md",
  "body": "# Release Checklist\n\n- Run release docs.\n- Verify changelog.\n"
}
Support files are workspace-scoped, path-checked, byte-limited by maxSkillBytes, scanned, and written atomically.
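The path constraints can be sketched as below. This is an assumption about the shape of the check, not the plugin's real validation code.

```typescript
// Sketch of the documented support-file path constraints: relative paths
// only, no traversal, and a whitelisted top-level directory.
const allowedTopDirs = ["references", "templates", "scripts", "assets"];

function isAllowedSupportPath(relativePath: string): boolean {
  const segments = relativePath.split("/");
  if (relativePath.startsWith("/") || segments.includes("..")) {
    return false; // no absolute paths, no escaping the skill directory
  }
  // Must be a file inside one of the allowed top-level directories.
  return segments.length >= 2 && allowedTopDirs.includes(segments[0]);
}
```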

Skill writes

Skill Workshop writes only under:
<workspace>/skills/<normalized-skill-name>/
Skill names are normalized:
  • lowercased
  • non [a-z0-9_-] runs become -
  • leading/trailing non-alphanumerics are removed
  • max length is 80 characters
  • final name must match [a-z0-9][a-z0-9_-]{1,79}
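The normalization rules above can be sketched as one function. The plugin's actual implementation may differ in edge cases; this follows the listed steps in order.

```typescript
// Sketch of the documented skill-name normalization rules.
function normalizeSkillName(raw: string): string | null {
  let name = raw.toLowerCase();
  name = name.replace(/[^a-z0-9_-]+/g, "-");            // disallowed runs become "-"
  name = name.replace(/^[^a-z0-9]+|[^a-z0-9]+$/g, "");  // trim edge non-alphanumerics
  name = name.slice(0, 80);                             // max length 80
  // Final shape check; null here would mean the name is unusable.
  return /^[a-z0-9][a-z0-9_-]{1,79}$/.test(name) ? name : null;
}
```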
For create:
  • if the skill does not exist, Skill Workshop writes a new SKILL.md
  • if it already exists, Skill Workshop appends the body to ## Workflow
For append:
  • if the skill exists, Skill Workshop appends to the requested section
  • if it does not exist, Skill Workshop creates a minimal skill then appends
For replace:
  • the skill must already exist
  • oldText must be present exactly
  • only the first exact match is replaced
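The replace semantics can be sketched with a simple first-match swap:

```typescript
// Sketch of the documented replace semantics: oldText must be present
// exactly, and only the first occurrence is swapped for newText.
function applyReplace(skillBody: string, oldText: string, newText: string): string {
  const at = skillBody.indexOf(oldText);
  if (at === -1) throw new Error("oldText not found exactly in skill");
  return skillBody.slice(0, at) + newText + skillBody.slice(at + oldText.length);
}
```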
All writes are atomic and refresh the in-memory skills snapshot immediately, so the new or updated skill can become visible without a Gateway restart.

Safety model

Skill Workshop has a safety scanner on generated SKILL.md content and support files. Critical findings quarantine proposals:
| Rule id | Blocks content that… |
| --- | --- |
| prompt-injection-ignore-instructions | tells the agent to ignore prior/higher instructions |
| prompt-injection-system | references system prompts, developer messages, or hidden instructions |
| prompt-injection-tool | encourages bypassing tool permission/approval |
| shell-pipe-to-shell | includes curl/wget piped into sh, bash, or zsh |
| secret-exfiltration | appears to send env/process env data over the network |
Warn findings are retained but do not block by themselves:
| Rule id | Warns on… |
| --- | --- |
| destructive-delete | broad rm -rf style commands |
| unsafe-permissions | chmod 777 style permission use |
Quarantined proposals:
  • keep scanFindings
  • keep quarantineReason
  • appear in list_quarantine
  • cannot be applied through apply
To recover from a quarantined proposal, create a new safe proposal with the unsafe content removed. Do not edit the store JSON by hand.
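The critical/warn split can be sketched as below. The rule ids mirror the tables above, but the concrete regex patterns here are assumptions, not the plugin's real rules.

```typescript
// Illustrative scanner sketch: critical findings quarantine, warn findings
// are recorded but do not block by themselves.
type Severity = "critical" | "warn";
interface Finding { ruleId: string; severity: Severity }

const scanRules: Array<{ ruleId: string; severity: Severity; pattern: RegExp }> = [
  { ruleId: "prompt-injection-ignore-instructions", severity: "critical",
    pattern: /ignore (all |any )?(prior|previous|higher) instructions/i },
  { ruleId: "shell-pipe-to-shell", severity: "critical",
    pattern: /\b(curl|wget)\b[^\n|]*\|\s*(sh|bash|zsh)\b/ },
  { ruleId: "destructive-delete", severity: "warn", pattern: /\brm\s+-rf\b/ },
];

function scan(content: string): Finding[] {
  return scanRules
    .filter((r) => r.pattern.test(content))
    .map((r) => ({ ruleId: r.ruleId, severity: r.severity }));
}

const shouldQuarantine = (findings: Finding[]): boolean =>
  findings.some((f) => f.severity === "critical");
```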

Prompt guidance

When enabled, Skill Workshop injects a short prompt section that tells the agent to use skill_workshop for durable procedural memory. The guidance emphasizes:
  • procedures, not facts/preferences
  • user corrections
  • non-obvious successful procedures
  • recurring pitfalls
  • stale/thin/wrong skill repair through append/replace
  • saving reusable procedures after long tool loops or hard fixes
  • short imperative skill text
  • no transcript dumps
The write mode text changes with approvalPolicy:
  • pending mode: queue suggestions; apply only after explicit approval
  • auto mode: apply safe workspace-skill updates when clearly reusable

Costs and runtime behavior

Heuristic capture does not call a model. LLM review uses an embedded run on the active/default agent model. It is threshold-based so it does not run on every turn by default. The reviewer:
  • uses the same configured provider/model context when available
  • falls back to runtime agent defaults
  • has reviewTimeoutMs
  • uses lightweight bootstrap context
  • has no tools
  • writes nothing directly
  • can only emit a proposal that goes through the normal scanner and approval/quarantine path
If the reviewer fails, times out, or returns invalid JSON, the plugin logs a warning/debug message and skips that review pass.

Operating patterns

Use Skill Workshop when the user says:
  • “next time, do X”
  • “from now on, prefer Y”
  • “make sure to verify Z”
  • “save this as a workflow”
  • “this took a while; remember the process”
  • “update the local skill for this”
Good skill text:
## Workflow

- Verify the GIF URL resolves to `image/gif`.
- Confirm the file has multiple frames.
- Record source URL, license, and attribution.
- Store a local copy when the asset will ship with the product.
- Verify the local asset renders in the target UI before final reply.
Poor skill text:
The user asked about a GIF and I searched two websites. Then one was blocked by
Cloudflare. The final answer said to check attribution.
Reasons the poor version should not be saved:
  • transcript-shaped
  • not imperative
  • includes noisy one-off details
  • does not tell the next agent what to do

Debugging

Check whether the plugin is loaded:
openclaw plugins list --enabled
Check proposal counts from an agent/tool context:
{ "action": "status" }
Inspect pending proposals:
{ "action": "list_pending" }
Inspect quarantined proposals:
{ "action": "list_quarantine" }
Common symptoms:
| Symptom | Likely cause | Check |
| --- | --- | --- |
| Tool is unavailable | Plugin entry is not enabled | plugins.entries.skill-workshop.enabled and openclaw plugins list |
| No automatic proposal appears | autoCapture: false, reviewMode: "off", or thresholds not met | Config, proposal status, Gateway logs |
| Heuristic did not capture | User wording did not match correction patterns | Use explicit skill_workshop.suggest or enable LLM reviewer |
| Reviewer did not create a proposal | Reviewer returned none, invalid JSON, or timed out | Gateway logs, reviewTimeoutMs, thresholds |
| Proposal is not applied | approvalPolicy: "pending" | list_pending, then apply |
| Proposal disappeared from pending | Duplicate proposal reused, max pending pruning, or was applied/rejected/quarantined | status, list_pending with status filters, list_quarantine |
| Skill file exists but model misses it | Skill snapshot not refreshed or skill gating excludes it | openclaw skills status and workspace skill eligibility |
Relevant logs:
  • skill-workshop: queued <skill>
  • skill-workshop: applied <skill>
  • skill-workshop: quarantined <skill>
  • skill-workshop: heuristic capture skipped: ...
  • skill-workshop: reviewer skipped: ...
  • skill-workshop: reviewer found no update

QA scenarios

Repo-backed QA scenarios:
  • qa/scenarios/plugins/skill-workshop-animated-gif-autocreate.md
  • qa/scenarios/plugins/skill-workshop-pending-approval.md
  • qa/scenarios/plugins/skill-workshop-reviewer-autonomous.md
Run the deterministic coverage:
pnpm openclaw qa suite \
  --scenario skill-workshop-animated-gif-autocreate \
  --scenario skill-workshop-pending-approval \
  --concurrency 1
Run reviewer coverage:
pnpm openclaw qa suite \
  --scenario skill-workshop-reviewer-autonomous \
  --concurrency 1
The reviewer scenario is intentionally separate because it enables reviewMode: "llm" and exercises the embedded reviewer pass.

When not to enable auto apply

Avoid approvalPolicy: "auto" when:
  • the workspace contains sensitive procedures
  • the agent is working on untrusted input
  • skills are shared across a broad team
  • you are still tuning prompts or scanner rules
  • the model frequently handles hostile web/email content
Use pending mode first. Switch to auto mode only after reviewing the kind of skills the agent proposes in that workspace.