OpenClaw's built-in skills cover common workflows, and ClawHub hosts plenty more. But the ones that matter most tend to be the ones nobody has built yet: automations shaped around your own projects and tools.
This tutorial shows how to build two custom skills. The first wraps a Python script that converts Jupyter notebooks to Word documents. If you write in notebooks but hand off .docx files to editors or stakeholders, this turns a manual export into a slash command. The second generates images with Nano Banana Pro through the Replicate API, layering credential management and environment scoping on top of the basics.
The tutorial also covers Docker sandboxing, metadata gating, and publishing skills to ClawHub. For a broader view of the OpenClaw platform, see OpenClaw Projects: What You Can Build and our guide to the top agent skills.
What Are OpenClaw Skills?
Skills are how you add new behaviors to the OpenClaw agent. A skill could be as simple as a slash command that reformats code, or as involved as a multi-step workflow that reviews PRs and posts comments to Jira or Slack.
If you've used MCP (Model Context Protocol) servers in Claude Code or similar tools, skills fill a different role. MCP servers are separate processes exposing tools through a standardized protocol, which suits integrations needing persistent state or multiple tool endpoints.
Skills skip all of that: you write plain-language instructions that the agent reads and follows at runtime, making them faster to build when you just need one thing automated. The OpenClaw vs Claude Code comparison goes deeper into the trade-offs.
Hooks, the other extension point, fire automatically when something happens, like a tool call completing or the model generating a response. Skills sit idle until the user types a slash command or the agent decides one is relevant to the current task.

OpenClaw bundles 49 skills covering email, calendars, GitHub, browser automation, and more. The community has published thousands of additional ones on ClawHub. For background on how the platform has evolved, see the MoltBot to ClawdBot history.
Prerequisites
You'll need:
- OpenClaw installed and running via Telegram (install guide). Telegram's BotFather integration has the best slash command support, which is how you'll trigger skills throughout this tutorial.
- uv installed (both skills in this tutorial use it for Python dependencies)
- Comfort with the terminal, YAML, and Markdown
- A Replicate API token for the image generation skill (free signup, pay-per-use)
- A GitHub account at least one week old, if you want to publish to ClawHub
If you'd rather run OpenClaw with local models, the OpenClaw with Ollama tutorial covers that setup.
Building Your First OpenClaw Skill
This first skill wraps a Python script that converts Jupyter notebooks to Word documents, handling markdown formatting, code blocks, images, tables, and hyperlinks so the .docx output preserves the structure of the original notebook. If you regularly hand off notebook content to people who work in Word, this turns that manual export into a single slash command.
Create the skill folder in OpenClaw's managed skills directory:
mkdir -p ~/.openclaw/skills/notebook-to-docx
Skills in ~/.openclaw/skills are available across all your sessions. You can also place them inside a project at <project>/skills to scope them to that workspace, and when a name appears in both locations, the workspace copy wins over the managed one, which in turn overrides any bundled skill with the same name.
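That precedence amounts to a first-match-wins lookup over the three locations. Here's a minimal sketch of the idea (the real loader's internals may differ):

```python
from pathlib import Path

def resolve_skill(name, search_roots):
    """Return the first folder containing a SKILL.md for `name`.

    search_roots is ordered: workspace skills first, then managed
    skills, then OpenClaw's bundled skills.
    """
    for root in search_roots:
        candidate = Path(root) / name
        if (candidate / "SKILL.md").is_file():
            return candidate
    return None
```

A workspace copy shadows a managed one simply because its root is checked first.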
Writing the SKILL.md
Every skill needs one file: SKILL.md. YAML frontmatter at the top defines how OpenClaw loads the skill, and the markdown body below it contains the instructions the agent follows at runtime.
Create ~/.openclaw/skills/notebook-to-docx/SKILL.md, starting with the frontmatter:
---
name: notebook-to-docx
description: Convert Jupyter notebooks to Word documents with proper formatting
user-invocable: true
metadata: {"openclaw": {"requires": {"bins": ["uv"]}}}
---
name doubles as the slash command (/notebook-to-docx). description gives the agent a one-liner it uses to judge relevance to the current task. Setting user-invocable: true registers the slash command in your Telegram chat. The metadata JSON handles load-time gating: requires.bins tells OpenClaw to skip this skill if uv isn't on the system PATH rather than failing at runtime.
If you want the opposite direction, where the skill never fires unless you explicitly type the slash command, set disable-model-invocation: true.
Tip: YAML frontmatter only supports single-line values. Multi-line strings or block scalars will cause parse errors, which is why metadata is a single-line JSON object rather than nested YAML.
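To see why, here's a toy parser in the spirit of that rule (not OpenClaw's actual implementation): each frontmatter line is treated as a single `key: value` pair, with `metadata` decoded as one-line JSON.

```python
import json

def parse_skill_md(text):
    """Toy SKILL.md parser: single-line frontmatter values only."""
    _, frontmatter, body = text.split("---\n", 2)
    fields = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(": ")
        # metadata is single-line JSON, so one json.loads call suffices
        fields[key] = json.loads(value) if key == "metadata" else value
    return fields, body
```

A multi-line value would be split into continuation lines with no `: ` separator, which is roughly how a stricter parser ends up rejecting the file.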
Below the frontmatter, add the instruction body:
# Notebook to DOCX Converter
Converts Jupyter notebooks (.ipynb) to Word documents (.docx) with proper formatting.
## Usage
Run the conversion script:
uv run --with nbformat --with python-docx --with Pillow python {baseDir}/notebook_to_docx.py <notebook_path> [output_path]
If output_path is not specified, creates a .docx file with the same name as the notebook.
## Features
- Markdown formatting preserved as Word styles (bold, italics, headings)
- Backticks preserved around inline code with monospace font
- Code blocks show triple backticks and language name, use Courier New font
- Non-code text uses Poppins font
- Images embedded with alt text
- Hyperlinks preserved and clickable
- Markdown tables converted to Word tables
## Requirements
- nbformat
- python-docx
- Pillow
{baseDir} is a template variable that resolves to the skill folder path at runtime, so you don't need to hardcode the location. That matters when someone else installs your skill in a different directory.
The uv run --with flags pull in the three libraries the script needs, keeping the skill self-contained rather than assuming those packages exist in the user's environment.
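Conceptually, the substitution is nothing more than a string replace before the command runs. A sketch (the path below is illustrative):

```python
def render_command(template: str, skill_dir: str) -> str:
    """Resolve the {baseDir} placeholder against the skill's install folder."""
    return template.replace("{baseDir}", skill_dir)

# The command works wherever the skill happens to be installed
cmd = render_command(
    "uv run --with nbformat python {baseDir}/notebook_to_docx.py report.ipynb",
    "/home/alice/.openclaw/skills/notebook-to-docx",
)
```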
The supporting script
The Python script goes in the same folder as SKILL.md. At about 490 lines, it's too long to include here, so grab the full script from this gist and place it as notebook_to_docx.py in ~/.openclaw/skills/notebook-to-docx/. It covers everything listed in the Features section of the SKILL.md above.
Here's the entry point so you can see what it does at a high level:
def convert_notebook_to_docx(notebook_path, output_path=None):
    notebook_path = Path(notebook_path)
    if output_path is None:
        output_path = notebook_path.with_suffix('.docx')
    else:
        output_path = Path(output_path)
    with open(notebook_path, 'r', encoding='utf-8') as f:
        nb = nbformat.read(f, as_version=4)
    doc = Document()
    create_styles(doc)
    style = doc.styles['Normal']
    style.font.name = 'Poppins'
    style.font.size = Pt(11)
    base_path = notebook_path.parent
    for cell in nb.cells:
        if cell.cell_type == 'markdown':
            process_markdown_cell(doc, cell.source, base_path)
        elif cell.cell_type == 'code':
            process_code_cell(doc, cell.source, cell.get('outputs', []))
    doc.save(output_path)
    print(f'Converted: {notebook_path} -> {output_path}')
    return output_path
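The CLI glue around that entry point is small. Here's a sketch of the `<notebook_path> [output_path]` contract from the SKILL.md, including the default-suffix rule (names are illustrative; the gist's version may differ):

```python
from pathlib import Path

def parse_cli(argv):
    """Implement: <notebook_path> [output_path], defaulting to <name>.docx."""
    if not 1 <= len(argv) <= 2:
        raise SystemExit("usage: notebook_to_docx.py <notebook_path> [output_path]")
    notebook = Path(argv[0])
    output = Path(argv[1]) if len(argv) == 2 else notebook.with_suffix(".docx")
    return notebook, output
```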
Testing the skill
OpenClaw snapshots its skill list at session start, but a built-in file watcher picks up new SKILL.md files within about 250ms. If the skill doesn't appear, restart the session.
Here's the notebook we'll use as a test:

Open your Telegram chat with OpenClaw and type /notebook-to-docx, then tell it which notebook to convert:

The resulting Word document:

Headings, code blocks, inline formatting, and hyperlinks all land in the right Word styles. If something looks off in your output, check that the features list in your SKILL.md matches what the script supports.
OpenClaw Security and Sandboxing
OpenClaw can run tool execution inside Docker containers, which limits what a misbehaving or compromised skill can touch on your machine. The setting lives in agents.defaults.sandbox inside ~/.openclaw/openclaw.json, and there are three modes to choose from:
"off"is the default, with tools running directly on the host and no isolation layer."non-main"keeps your primary chat session on the host but moves background and automated sessions into containers.- With
"all", every session runs inside a container regardless of context.
On top of the mode, you choose a workspace access level that decides how much of your filesystem the container sees. The default, "none", gives the sandbox its own isolated directory under ~/.openclaw/sandboxes with no access to your project files.
With "ro", your workspace is mounted read-only at /agent, so the agent can read your code but not change anything. "rw" goes further and grants full read-write access at /workspace.
A working configuration that sandboxes background sessions while giving them write access looks like this:
{
  "agents": {
    "defaults": {
      "sandbox": {
        "mode": "non-main",
        "scope": "session",
        "workspaceAccess": "rw"
      }
    }
  }
}

This becomes relevant the moment your skills start calling APIs or handling credentials.
When a skill runs inside a container, environment variables from the host aren't there automatically. A REPLICATE_API_TOKEN you exported in .bashrc won't exist inside the sandbox, so secrets need to go through OpenClaw's config system instead, which is what we'll set up in the next section.
Tip: If your skill uses requires.bins in its metadata to gate on a CLI tool, that check runs on the host at load time. But when the agent is sandboxed, the binary also needs to exist inside the container. Install it via sandbox.docker.setupCommand or bake it into a custom Docker image.
Sandboxing also caps the blast radius when file operations or shell commands go wrong. A skill that accidentally runs rm -rf / hits the container filesystem rather than your actual machine, which is a decent reason to turn it on even if you trust your own code.
For more on how AI agent workflows handle safety boundaries, see AI Agent Workflows with Claude CoWork.
Building an API-Connected Skill
The second skill generates images using Google's Nano Banana Pro (Gemini 3 Pro Image) model through the Replicate API, which means wiring up credential management and environment gating on top of the SKILL.md basics.

Create the skill folder:
mkdir -p ~/.openclaw/skills/nano-banana-pro
Create ~/.openclaw/skills/nano-banana-pro/SKILL.md, starting with the frontmatter:
---
name: nano-banana-pro
description: Generate or edit images via Gemini 3 Pro Image on Replicate
user-invocable: true
metadata: {"openclaw": {"emoji": "🎨", "requires": {"env": ["REPLICATE_API_TOKEN"], "bins": ["uv"]}, "primaryEnv": "REPLICATE_API_TOKEN"}}
---
The structure matches the first skill, but the metadata field is doing more. It now includes two gates: requires.env checks that REPLICATE_API_TOKEN exists before loading the skill, and requires.bins checks for uv. If either is missing, the skill is silently skipped.
The emoji field sets an icon in the Telegram slash command list. And primaryEnv maps REPLICATE_API_TOKEN to the apiKey shortcut in the config (more on that in the credential section below).
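The gating itself amounts to two cheap checks at load time. A hedged sketch of what OpenClaw is presumably doing under the hood:

```python
import os
import shutil

def passes_gates(metadata):
    """Return False if any required env var or binary is missing (skill is skipped)."""
    requires = metadata.get("openclaw", {}).get("requires", {})
    for var in requires.get("env", []):
        if not os.environ.get(var):
            return False
    for binary in requires.get("bins", []):
        # shutil.which mirrors a PATH lookup without running anything
        if shutil.which(binary) is None:
            return False
    return True
```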
If you want the macOS Skills UI to offer one-click installation for required binaries, add an install array to the metadata:
metadata: {"openclaw": {"requires": {"bins": ["uv"]}, "install": [{"id": "brew", "kind": "brew", "formula": "uv", "bins": ["uv"], "label": "Install uv (brew)"}]}}
On Linux, handle installation manually or via sandbox.docker.setupCommand.
Below the frontmatter, add the instruction body:
# Nano Banana Pro Image Generator
Generate and edit images using Google's Nano Banana Pro model via the Replicate API.
## Usage
Run the generation script:
uv run --with replicate python {baseDir}/generate.py --prompt "<user prompt>" [--aspect-ratio 1:1] [--output image.png]
## Options
- --prompt: The image description (required)
- --aspect-ratio: Ratio like 1:1, 4:3, 16:9 (default: 1:1)
- --output: Output file path (default: generated_image.png)
## Tips
- For text in images, be specific about fonts, size, and placement
- The model supports resolutions up to 2K
- Safety filtering is on by default
The body is shorter than the first skill's since the generation script handles most of the complexity. The {baseDir} template variable works the same way, resolving to the skill folder at runtime.
The generation script
Add ~/.openclaw/skills/nano-banana-pro/generate.py:
import replicate
import urllib.request
import argparse

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--prompt", required=True)
    parser.add_argument("--aspect-ratio", default="1:1")
    parser.add_argument("--output", default="generated_image.png")
    args = parser.parse_args()
    output = replicate.run(
        "google/nano-banana-pro",
        input={
            "prompt": args.prompt,
            "aspect_ratio": args.aspect_ratio,
            "output_format": "png",
            "safety_filter_level": "block_only_high",
        },
    )
    # Replicate returns a FileOutput; download the image
    url = str(output[0]) if isinstance(output, list) else str(output)
    urllib.request.urlretrieve(url, args.output)
    print(f"Image saved to {args.output}")

if __name__ == "__main__":
    main()
It parses the arguments, calls replicate.run() with the model name and input parameters, and downloads the resulting image. The replicate library reads REPLICATE_API_TOKEN from the environment automatically.
Configuring the API credential
Add an entry to ~/.openclaw/openclaw.json:
{
  "skills": {
    "entries": {
      "nano-banana-pro": {
        "enabled": true,
        "apiKey": "r8_your_replicate_token_here",
        "env": {
          "REPLICATE_API_TOKEN": "r8_your_replicate_token_here"
        }
      }
    }
  }
}
There are two ways to supply the credential here. The apiKey field is a shortcut that maps to whatever primaryEnv declares in the skill metadata. The env block gives you finer control, letting you inject multiple environment variables if the skill needs them.
Both approaches scope the values to the agent run. They're set when the run starts and cleared when it ends, so they don't leak into your global shell environment.
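Mechanically, run-scoping works like spawning the tool process with a copied-and-extended environment rather than mutating your shell. A sketch of the behavior (not OpenClaw's actual code):

```python
import os
import subprocess

def run_with_skill_env(cmd, entry):
    """Inject a skill's `env` block into the child process only."""
    env = os.environ.copy()
    env.update(entry.get("env", {}))
    # The parent environment is never modified, so nothing leaks after the run
    return subprocess.run(cmd, env=env, capture_output=True, text=True)
```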
Testing
Start a new OpenClaw session and invoke the skill:
/nano-banana-pro generate a beautiful and accurate diagram of how backpropagation works
Here's the diagram the skill produced via Nano Banana Pro on Replicate:

The image came through, but getting here involved a detour. The first version of this SKILL.md had no ## Rules section, which gave the agent room to improvise. When Nano Banana Pro returned a "service unavailable" error due to high demand, the agent decided on its own to try google/nano-banana (the non-Pro variant) as a fallback and generated the image with that model instead.
From the agent's perspective, the choice made sense: complete the task by any means available. From yours, it wasn't what you asked for. The fix was adding behavioral constraints to the instruction body:
## Rules
- Only use the google/nano-banana-pro model. Never fall back to other models like google/nano-banana or any alternative. If the model is unavailable or rate-limited, report the error to the user and stop.
- After generating an image, send the image file directly in the chat. Do not just save it to the workspace silently.
The agent treats SKILL.md instructions as guidance rather than hard limits, and it will fill gaps with its own judgment. Anything you don't forbid, it may decide to try.
If a behavior matters to you, whether that's which model to use, where to send output, or whether to retry on failure, spell it out in a Rules section.
Publishing and Sharing Skills on ClawHub
ClawHub is the public registry for OpenClaw skills, free to browse and install. Publishing requires a GitHub account that's at least one week old.
Setting up the CLI
Install the ClawHub CLI globally:
npm i -g clawhub
Then authenticate:
clawhub login
This opens your browser for GitHub authentication. Once authenticated, you can search, install, and publish skills from the terminal.
Publishing your skill
To publish the image generation skill:
clawhub publish ~/.openclaw/skills/nano-banana-pro \
--slug nano-banana-pro \
--name "Nano Banana Pro" \
--version 1.0.0 \
--tags latest
The --slug is the unique identifier on ClawHub and must be unique across the entire registry. If someone else has already published a skill with that slug, the command will fail with an "only the owner can publish updates" error. In that case, pick a different slug, something like yourname-nano-banana-pro.
The --version follows semantic versioning. Each time you publish an update, bump the version number and optionally add a changelog:
clawhub publish ~/.openclaw/skills/nano-banana-pro \
--slug nano-banana-pro \
--version 1.1.0 \
--changelog "Added image editing with --image-input flag"
ClawHub keeps version history so users can audit changes and roll back if needed.
For bulk operations, clawhub sync --all scans your skills directory and publishes any new or updated skills at once:
clawhub sync --all --bump patch
Installing community skills
To install a skill someone else published:
clawhub search "calendar"
clawhub install caldav-calendar
Installed skills go into ./skills by default, which OpenClaw picks up as workspace skills on the next session.
A word about third-party skills
In January 2026, security researchers at Koi discovered 341 malicious skills on ClawHub in what became known as the ClawHavoc incident. Attackers used typosquatted skill names and fake "prerequisite" installation steps to distribute the Atomic macOS Stealer (AMOS), reverse shells, and credential exfiltration payloads.
By mid-February 2026, the count had grown to over 824 flagged skills across dozens of categories.
Before installing any community skill, read its SKILL.md and supporting files. Watch for suspicious "prerequisite" installation steps, obfuscated code, or base64-encoded commands. ClawHub auto-hides skills with three or more user reports, but new malicious skills can appear faster than moderation catches them.
Tools like Clawdex can scan your installed skills against a database of known malicious packages.
Treat third-party skills with the same caution you'd apply to any third-party code: review before you run.
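Reading the files is the real defense, but a crude grep-style pass can surface the obvious red flags first. A stdlib sketch (the patterns are illustrative, not exhaustive):

```python
import re
from pathlib import Path

SUSPICIOUS = [
    r"curl[^|\n]*\|\s*(ba|z)?sh",   # pipe-to-shell "prerequisite" installs
    r"base64\s+(-d|--decode)",       # decoding a hidden payload
    r"[A-Za-z0-9+/=]{120,}",         # long base64-looking blobs
]

def flag_skill(skill_dir):
    """Return (file, line number, pattern) hits worth a closer manual look."""
    hits = []
    for path in sorted(Path(skill_dir).rglob("*")):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            hits += [(str(path), lineno, p) for p in SUSPICIOUS if re.search(p, line)]
    return hits
```

This only catches the laziest payloads; a registry-backed scanner and your own eyes remain the real review.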
Conclusion
Between the SKILL.md format, credential scoping through openclaw.json, and the ClawHub CLI, you have the full lifecycle from local automation to shared package.
Most of the work in building a new OpenClaw skill is writing clear instructions in the markdown body and deciding what to gate in the metadata. The actual code, whether it's a conversion script or an API call, stays in separate files where you can test and iterate on it independently.
To go beyond the scope of what this tutorial covered, the bundled skills in the OpenClaw repo show how the core team structures more involved workflows. The Claude Opus 4.6 overview goes deeper on how model choices affect agent behavior, and the Introduction to Claude Models course offers hands-on practice with the models behind agents like OpenClaw.
Building OpenClaw Skills FAQs
What is a SKILL.md file in OpenClaw?
SKILL.md is the single file every OpenClaw skill needs. It has YAML frontmatter that defines loading behavior (name, description, metadata gates) and a markdown body with instructions the agent follows at runtime.
Where should I put custom OpenClaw skills?
Place them in ~/.openclaw/skills/ for managed skills available across all sessions, or in <project>/skills/ for workspace-scoped skills. Workspace skills override managed ones, which override bundled skills with the same name.
How do I pass API keys to an OpenClaw skill securely?
Add the credential to the skill's entry in ~/.openclaw/openclaw.json using the apiKey shortcut or the env block. Both scope the values to the agent run and clear them when it ends, so they don't leak into your shell environment.
How do I publish an OpenClaw skill to ClawHub?
Install the ClawHub CLI with npm, authenticate via clawhub login, then run clawhub publish with your skill path, a globally unique slug, and a semantic version number. You need a GitHub account at least one week old.
What is OpenClaw's Docker sandbox and when should I use it?
The Docker sandbox runs tool execution inside containers to limit what a skill can touch on your host machine. It has three modes: off, non-main (background sessions only), and all. It's worth turning on when your skills handle credentials or run shell commands.

I am a data science content creator with over 2 years of experience and one of the largest followings on Medium. I like to write detailed articles on AI and ML with a bit of a sarcastic style because you've got to do something to make them a bit less dull. I have produced over 130 articles and a DataCamp course to boot, with another one in the making. My content has been seen by over 5 million pairs of eyes, 20k of whom became followers on both Medium and LinkedIn.

