Installing OpenClaw Skills on a Hardened Raspberry Pi — What Nobody Tells You About ARM64
March 3, 2026 · 15 min read


OpenClaw's skill installer assumes macOS and Homebrew. On a hardened Raspberry Pi, every dependency needs the right user, the right PATH, and the right architecture. Here's what actually works on ARM64 — including building whisper.cpp when PyTorch won't run.


Part 1: Ultrasecure OpenClaw Setup on Raspberry Pi

Part 2: Updating Your OpenClaw Installation on Raspberry Pi

Part 3 (This One): Installing OpenClaw Skills on a Hardened Raspberry Pi

If you followed the previous parts of this series, your Raspberry Pi is now:

  • Locked behind Tailscale with no public ports
  • Running OpenClaw as an isolated system user under /opt/openclaw
  • Supervised by a hardened systemd service with resource limits

The next natural step is to give your AI agent real-world abilities through skills — tools that let it read Gmail, transcribe audio, generate images, manage GitHub, and more.

This is where you will fall into OPENCLAW SETUP HELL.

Because the official skill installation flow assumes you are on macOS with Homebrew. You are not. You are on a headless Raspberry Pi 4, ARM64 architecture, running Debian 13, with a hardened service user that cannot see half of what you install.

This post covers everything that actually happened — including the failures, the wrong turns, the agent diagnosing its own environment, and the hard-won lessons — so you do not have to repeat them.


The First Mistake: Trusting the Skill Installer

When you first try to install skills from the OpenClaw UI or CLI, you will see something like this (I was trying to enable whisper for offline STT, nano-banana-pro for image generation, gog for Google skills, and gh for GitHub skills):

Install failed: openai-whisper brew not installed Homebrew is not installed.
Install failed: nano-banana-pro brew not installed install "uv" manually.
Install failed: gog brew not installed install "steipete/tap/gogcli" manually.
Install failed: github brew not installed install "gh" manually.

The immediate instinct is to install Homebrew. Do not do this.

Homebrew on a Raspberry Pi would pull in over a thousand unnecessary packages, bypass Debian's security model, install everything under /home/linuxbrew (where ARM64 support is second-class), and increase your attack surface with no benefit.

The real problem is that these error messages are written for macOS users. On Linux, every one of these dependencies can be installed natively — in most cases with a single apt command.


Correcting the Mental Model: Built-In Skills vs Community Skills

Before installing anything, you need to understand how OpenClaw's skill system actually works. There are two fundamentally different types of skills, and they are managed completely differently.

  • Built-in skills ship with OpenClaw. They are already on your machine inside the npm package. You do not install them — you enable them by editing ~/.openclaw/openclaw.json and satisfying their requirements (a binary on PATH, an environment variable, or a config value). Skills like nano-banana-pro, openai-whisper, github, and gog are all built-ins.
  • Community skills live on ClawHub and are installed via the ClawHub CLI:
npm i -g clawhub
clawhub search "weather"
clawhub install <skill-slug>

The key insight is that the openclaw skills install command triggering Homebrew errors is trying to resolve binary dependencies for built-in skills — not install the skills themselves. Those binaries need to be present on your system in a location the OpenClaw service process can actually see.

Which brings us to the next problem.


Step 1: Understand the Three PATH Environments

This is the most important concept in this entire post, and the one that will waste the most time if you misunderstand it.

Your system has three distinct PATH environments, and they are not the same:

  • The rpi admin user's interactive shell — your normal terminal session, loaded from .bashrc and .profile.
  • The openclaw interactive shell — what you get when you run sudo -u openclaw -H bash -l. Loaded from the openclaw user's own dotfiles.
  • The openclaw systemd service process — what is actually running OpenClaw. This uses only what is explicitly defined in your service file's Environment=PATH= line.

When you install a binary and it works in your shell but OpenClaw cannot find it, the answer is almost always: wrong PATH environment.

The correct test to check what OpenClaw's service process can see is:

sudo -u openclaw -H bash -lc 'which <package_name>'

Not which <package_name> from your rpi user. That tells you nothing about what the service can access.
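To make this concrete, here is a small sketch that checks whether a binary resolves under a given PATH string. The two PATH values below are illustrative placeholders — substitute your shell's actual $PATH and the Environment=PATH= line from your service file:

```shell
# visible_under "<path-string>" <command>
# Succeeds only if <command> resolves under the given PATH string.
visible_under() {
  PATH="$1" command -v "$2" >/dev/null 2>&1
}

# Illustrative values — replace with your real shell and service PATHs.
SHELL_PATH="/usr/local/bin:/usr/bin:/bin:$HOME/.local/bin"
SERVICE_PATH="/usr/bin:/bin"

for bin in gh gog uv; do
  visible_under "$SHELL_PATH" "$bin"   && echo "$bin: shell OK"   || echo "$bin: shell MISSING"
  visible_under "$SERVICE_PATH" "$bin" && echo "$bin: service OK" || echo "$bin: service MISSING"
done
```

A binary that prints "shell OK" but "service MISSING" is exactly the failure mode described above: installed, working in your terminal, invisible to OpenClaw.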


Step 2: Install the GitHub CLI (`gh`)

This one is easy because it is in Debian's package repository:

sudo apt update
sudo apt install gh -y

Verify it is globally accessible:

which gh
# /usr/bin/gh

Since /usr/bin is in every PATH environment, this is immediately visible to the OpenClaw service. No further action needed.


Step 3: Install `gogcli` — The ARM64 Reality Check

gogcli is a powerful Google CLI tool covering Gmail, Calendar, Drive, Sheets, Docs, and more. The skill installer tries to download a prebuilt linux-arm64 binary from GitHub Releases. This is where you hit your first real wall.

When you run:

curl -L https://github.com/steipete/gogcli/releases/latest/download/gogcli-linux-arm64 -o gogcli

You will see:

100     9  100     9

Nine bytes downloaded. That is not a binary. That is GitHub returning a Not Found page. There is no prebuilt ARM64 binary in the release. If you missed this and ran chmod +x gogcli && sudo mv gogcli /usr/local/bin/ anyway, you would get:

/usr/local/bin/gogcli: line 1: Not: command not found

Clean this up:

sudo rm /usr/local/bin/gogcli
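A defensive pattern worth adopting for any curl-based binary install: pass -f so curl fails on HTTP errors instead of saving the error page, and sanity-check the file size before anything lands in /usr/local/bin. A sketch — the 100 KB floor is an arbitrary assumption, not a gogcli-specific number:

```shell
# Refuse to install a download that is too small to be a real binary.
# A GitHub "Not Found" body is a few bytes; a Go binary is megabytes.
verify_download() {
  local f="$1" min="${2:-100000}"   # arbitrary 100 KB floor
  local size
  size=$(wc -c < "$f")
  if [ "$size" -lt "$min" ]; then
    echo "refusing $f: only $size bytes (expected at least $min)" >&2
    return 1
  fi
}

# Usage pattern (URL elided — there is no ARM64 asset to fetch here):
#   curl -fL "$URL" -o gogcli && verify_download gogcli \
#     && sudo install -m 0755 gogcli /usr/local/bin/gogcli

# Demo against a fake 9-byte "Not Found" download:
printf 'Not Found' > /tmp/fake-gogcli
verify_download /tmp/fake-gogcli || echo "caught the bad download"
```

Had this guard been in place, the nine-byte error page would never have reached /usr/local/bin.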

Building gogcli From Source

gogcli is written in Go, which compiles cleanly for ARM64. Install Go if you do not have it:

sudo apt install golang-go build-essential -y
go version
# should show: go version go1.xx.x linux/arm64

Clone and build:

cd /opt
sudo git clone https://github.com/steipete/gogcli.git
sudo chown -R openclaw:openclaw /opt/gogcli
sudo -u openclaw -H bash -l
cd /opt/gogcli
make
./bin/gog --help
exit

If ./bin/gog --help prints help output, install it globally:

sudo cp /opt/gogcli/bin/gog /usr/local/bin/gog
sudo chmod +x /usr/local/bin/gog

💡 The binary is named gog, not gogcli.


Step 4: Install `uv` (Python Package Manager)

The next dependency is uv, a modern Python package manager required by several skills, including nano-banana-pro:

curl -LsSf https://astral.sh/uv/install.sh | sh
sudo mv ~/.local/bin/uv /usr/local/bin/
sudo chmod +x /usr/local/bin/uv
uv --version

Verify the openclaw user can see it:

sudo -u openclaw -H bash -lc 'which uv'
# /usr/local/bin/uv

Step 5: The Critical systemd PATH Fix

At this point you have installed gh, gog, and uv into /usr/local/bin. But check your service file:

sudo nano /etc/systemd/system/openclaw.service

If the Environment=PATH= line does not include /usr/local/bin and your venv, update it:

Environment=PATH=/opt/openclaw/venv/bin:/usr/local/bin:/opt/openclaw/.nvm/versions/node/v22.12.0/bin:/usr/bin:/bin

⚠️ Replace the Node version with whatever you have. Ideally you should have Node 22. Check with ls /opt/openclaw/.nvm/versions/node/.
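An alternative that keeps your change out of the unit file itself — so it survives package updates — is a systemd drop-in. Run sudo systemctl edit openclaw and add the same illustrative PATH line:

```ini
# Written to /etc/systemd/system/openclaw.service.d/override.conf
[Service]
Environment=PATH=/opt/openclaw/venv/bin:/usr/local/bin:/opt/openclaw/.nvm/versions/node/v22.12.0/bin:/usr/bin:/bin
```

Either way, the reload and restart below are required before the change takes effect.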

Reload and restart:

sudo systemctl daemon-reload
sudo systemctl restart openclaw

💡 sudo -u openclaw env | grep PATH will NOT show your service PATH. That command shows the PATH for a new sudo session, completely separate from the running systemd process. To verify the actual service environment, use sudo systemctl show openclaw --property=Environment.


Step 6: Enabling the offline whisper skill (Speech-to-Text)

For Python-based skills (like Whisper), the correct architecture on a hardened system is a dedicated virtual environment owned by the openclaw user. Do not use --break-system-packages — it would work, but it is bad practice on a Debian-managed system.

Switch to the openclaw user and create the venv:

sudo -u openclaw -H bash -l
python3 -m venv /opt/openclaw/venv
source /opt/openclaw/venv/bin/activate
pip install --upgrade pip
pip install openai-whisper
exit

Verify the binary exists:

find /opt -name whisper 2>/dev/null
# /opt/openclaw/venv/bin/whisper
# /opt/openclaw/venv/lib/python3.13/site-packages/whisper

Also install ffmpeg system-wide — it is required for audio format conversion, which the whisper wrapper will need later:

sudo apt install ffmpeg -y

Verify:

ffmpeg -version
# ffmpeg version 7.x.x ...

Since it installs into /usr/bin, it is automatically visible to the openclaw service user and the systemd process with no PATH changes needed.

⚠️ If you installed whisper as the rpi user instead of the openclaw user, it will be in /home/rpi/.local/bin/ — completely invisible to the service. Always install runtime dependencies under the user that runs the service.


Step 7: The Raspberry Pi 4 and PyTorch — A Diagnosis From the Agent Itself

Here is where the setup gets genuinely interesting. Instead of guessing at the problem, you can ask the OpenClaw agent running on your Pi to check its own environment. I sent this message to my openclaw agent:

💬"Can you check if I have the requirements for openai-whisper without an API key?"

The agent then ran a series of checks internally:

WHISPER_BIN=/opt/openclaw/venv/bin/whisper
FFMPEG_BIN=/usr/bin/ffmpeg
whisper --version → Illegal instruction (crashed)
ModuleNotFoundError: No module named 'torch'

It then dug deeper, activating the venv directly and testing torch:

Python 3.13.5
Name: openai-whisper Version: 20250625
Name: torch Version: 2.10.0
Illegal instruction (exit code 132) ❌

The diagnosis came back clear: whisper was installed, ffmpeg was present, torch was installed — but Python itself was crashing with an illegal instruction the moment it tried to execute torch code.

This is not a PATH issue. This is not a permissions issue.

This is CPU architecture incompatibility.

Modern PyTorch ARM64 wheels on PyPI are often compiled targeting ARMv8.2+ with advanced SIMD instructions. The Raspberry Pi 4's Cortex-A72 is ARMv8.0. When PyTorch attempts to execute an instruction the CPU physically does not support, the kernel raises SIGILL.

You can confirm your hardware:

lscpu | grep Model
# Model name:    Cortex-A72
# Model:    3

uname -m
# aarch64

This confirms the hardware: a Raspberry Pi 4 with a Cortex-A72 (ARMv8.0) running a 64-bit userland.

PyTorch on PyPI often requires ARMv8.2+. There is no patch. There is no workaround at the pip level. The wheel is incompatible with the silicon.
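You can check for the relevant extension directly: on aarch64 Linux, ARMv8.2 dot-product support shows up as the asimddp flag in /proc/cpuinfo. A sketch with a pure helper — the Pi 4 flag list below is illustrative, not pulled from your machine:

```shell
# has_feature "<flags-string>" <flag> — succeeds if the flag is present.
# On a live system, feed it: grep -m1 '^Features' /proc/cpuinfo
has_feature() {
  case " $1 " in
    *" $2 "*) return 0 ;;
    *)        return 1 ;;
  esac
}

PI4_FLAGS="fp asimd evtstrm crc32 cpuid"   # typical Cortex-A72 feature set

if has_feature "$PI4_FLAGS" asimddp; then
  echo "ARMv8.2 dotprod present — stock PyTorch wheels may run"
else
  echo "no asimddp — expect SIGILL from ARMv8.2-targeted wheels"
fi
```

Running this check before a multi-gigabyte pip install saves both time and SD-card wear.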

This is one of the most common hidden failure modes on Raspberry Pi setups, and it has nothing to do with your configuration being wrong.


Step 8: Building whisper.cpp — OpenClaw Worked Through the Night

Rather than abandoning local transcription entirely, you can use whisper.cpp — a C++ port of the Whisper model that is natively optimized for ARM. No PyTorch. No instruction set mismatches.

I sent this prompt to my OpenClaw agent:

💬probably my cpu is not powerful enough to run pytorch, how can I enable the openai-whisper skill using whisper.cpp

The first suggestion was a compatibility wrapper — a bash script named whisper that would call whisper-cli under the hood and pass through the expected flags.

It would work, but it is brittle. One flag mismatch and the skill breaks silently.

So instead of letting the agent take this step, I asked a follow-up question:

"Is there a better way to enable the openai-whisper skill without the API?"

I want to show you the exact answer I received:

Yes — better than a wrapper hack:

Best approach on Pi (no API key)

Use a dedicated whisper.cpp skill (or update the existing skill) instead of
forcing whisper.cpp to pretend to be Python whisper.

Why this is better:
whisper.cpp is native/C++ and Pi-friendly, no PyTorch headaches, 
cleaner flags, predictable behavior and easier to maintain long-term

Practical options (best → worst)
Best: clone openai-whisper skill into a new local skill (e.g. whisper-cpp)
that checks for whisper-cli and uses proper whisper.cpp args.
Okay: patch current openai-whisper skill to accept whisper-cli instead of whisper.
Least clean: wrapper script named whisper (works, but brittle).

If you still want the existing skill unchanged, then yes, 
wrapper is the compatibility path. 
 
But architecturally, a separate whisper.cpp skill is the “right” way.

If you want, I can set that up now in your workspace:

build/install whisper.cpp under /opt/openclaw/tools/whisper.cpp,
add a local skill with clean commands, and test with one audio file end-to-end.

So, instead of faking the Python Whisper interface, building a dedicated local skill that uses whisper.cpp natively with the correct flags and a persistent model path obviously made more sense.

This was cleaner, more maintainable, and architecturally correct. So, I responded with:

💬"okay choose the best option and set it up"

The agent got to work. It immediately hit its first obstacle — cmake was not installed. Since OpenClaw runs with NoNewPrivileges=true, it cannot use sudo apt install. Instead, it installed cmake through the venv:

source /opt/openclaw/venv/bin/activate
pip install --quiet cmake
cmake version 4.2.3

It then cloned and built whisper.cpp under /opt/openclaw/tools/whisper.cpp. The cmake configuration output is where it gets architecturally interesting:

CMAKE_SYSTEM_PROCESSOR: aarch64
GGML_SYSTEM_ARCH: ARM
ARM detected flags: -mcpu=cortex-a72+crc
GGML_MACHINE_SUPPORTS_dotprod - Failed
GGML_MACHINE_SUPPORTS_i8mm - Failed
GGML_MACHINE_SUPPORTS_sve - Failed

cmake correctly detected the Cortex-A72 and disabled every instruction set extension the CPU does not support. This is exactly why whisper.cpp works where PyTorch does not — it builds specifically for your CPU instead of assuming a newer baseline.

The build ran overnight.

At some point around 3 AM, watching the compile percentage crawl upward, I decided to just walk away. I had been working for 17 hours straight and was exhausted, so I closed the laptop and went to sleep. Either it would work or it would not.

When I woke up, the binary was sitting there:

/opt/openclaw/tools/whisper.cpp/build/bin/whisper-cli

The agent also downloaded the base.en model (142MB) and created a symlink:

/opt/openclaw/.local/bin/whisper-cli -> /opt/openclaw/tools/whisper.cpp/build/bin/whisper-cli

Everything worked perfectly!


Step 9: Testing Transcription — And Fixing the Audio Format Problem

With the build complete, I sent an audio message via Telegram to test transcription. The agent attempted to run it directly and got:

error: failed to read audio data as wav
error: failed to read audio file '...file_0.ogg'

The model had loaded correctly (147MB base.en model, 386ms load time), but whisper.cpp expects WAV input and the Telegram audio was OGG/Opus. The fix was a conversion step via ffmpeg — already installed in Step 6 — which OpenClaw performed on its own.

After converting the OGG to 16kHz mono WAV:

this is a test audio I just want to check whether you can transcribe
this file or not if you can transcribe it say yes you understood the
contents of this audio

Transcription worked.


Step 10: Creating the Permanent Whisper Wrapper

A one-off conversion is not a real solution. You want the whisper command to simply work regardless of what audio format the agent receives. Since the direct OGG transcription had failed, OpenClaw was proactive and asked me:

🤖 If you want, I’ll now add a permanent wrapper so your whisper command auto-converts non-WAV files before transcription.

Of course I wanted that, so I responded:

💬yes do that

The agent read the openai-whisper skill's SKILL.md to understand the expected interface:

requires: { bins: ["whisper"] }

It then created a wrapper script at /opt/openclaw/venv/bin/whisper that:

  • Accepts the same flags as the Python Whisper CLI (--model, --output_dir, --output_format, --task, --language, --threads)
  • Detects non-WAV input files and converts them via ffmpeg to 16kHz mono WAV automatically
  • Calls the actual whisper-cli binary from the whisper.cpp build
  • Backs up the original broken Python whisper entrypoint to whisper.pytorch.bak
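To make the approach concrete, here is a minimal sketch of such a wrapper. The paths, model name, and flag handling are assumptions modeled on this setup — not the agent's exact script:

```shell
#!/usr/bin/env bash
# Sketch: forward Python-Whisper-style invocations to whisper-cli,
# converting non-WAV input first. All paths below are assumptions.
set -u

WHISPER_CLI="${WHISPER_CLI:-/opt/openclaw/.local/bin/whisper-cli}"
MODEL="${MODEL:-/opt/openclaw/tools/whisper.cpp/models/ggml-base.en.bin}"

# whisper.cpp only reads WAV; everything else goes through ffmpeg.
needs_conversion() {
  case "$1" in
    *.wav|*.WAV) return 1 ;;
    *)           return 0 ;;
  esac
}

transcribe() {
  local input="$1" wav="$1"
  if needs_conversion "$input"; then
    wav="$(mktemp --suffix=.wav)"
    # 16 kHz mono PCM is the format whisper.cpp expects.
    ffmpeg -loglevel error -y -i "$input" -ar 16000 -ac 1 "$wav"
  fi
  "$WHISPER_CLI" -m "$MODEL" -f "$wav"
}

# Demo of the format check only (no ffmpeg/whisper-cli invoked here):
for f in voice_note.ogg meeting.m4a clip.wav; do
  needs_conversion "$f" && echo "$f -> convert first" || echo "$f -> pass through"
done
```

The real script additionally maps flags like --model and --output_dir onto whisper-cli's arguments; the skeleton above shows only the conversion logic that makes the skill format-agnostic.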

The agent tested the wrapper against the original OGG file:

whisper-wrapper 0.1 (backend: whisper.cpp)
TEST_OK=1

The openai-whisper skill now has a working whisper binary on PATH, backed by whisper.cpp, using the base.en model, with automatic audio format conversion. No PyTorch. No illegal instructions.

💡 The default model is ggml-base.en.bin (142MB, good balance of speed and accuracy on Pi 4). You can pass --model tiny.en for faster results or --model small.en for better accuracy.


Step 11: Teaching the Agent Its Own Environment

One of the more practically valuable things you can do at this point is have the agent detect its own runtime environment and codify the correct dependency management approach. This prevents the agent from attempting brew installs in the future.

Send via Telegram/Gateway chat:

💬"Since my openclaw is not installed on mac but on a raspberry pi in an isolated environment, I do not want to use brew as the default manager for handling skill dependencies. Detect my system, check your permissions and choose a default manager for handling the skill dependencies."

The agent ran a full environment probe:

OS=Debian GNU/Linux 13 (trixie)
Linux raspberrypi 6.12.62+rpt-rpi-v8 aarch64
whoami: openclaw
APT=1   BREW=0   PIP=1   UV=1
SUDO_NOPASS=0
Python 3.13.5

Key finding: SUDO_NOPASS=0 — the agent confirmed it cannot use sudo, which is correct given NoNewPrivileges=true in the service file. It then selected:

  • Primary: uv pip inside /opt/openclaw/venv
  • Fallback: pip inside /opt/openclaw/venv
  • Non-Python binaries: user-space installs under /opt/openclaw/tools with wrappers in /opt/openclaw/venv/bin or /opt/openclaw/.local/bin

It wrote this policy into TOOLS.md inside the workspace:

TOOLS.md → "Dependency Manager Defaults (Raspberry Pi OpenClaw)"

The agent now has an explicit, persisted record of what it is running on, what permissions it has, and how to handle dependency installation. This makes future skill setups faster and stops it from attempting brew calls.


Step 12: Installing the `summarize` Skill

With the dependency management policy in place, the next skill to enable was summarize — a CLI that can summarize URLs, local files, and YouTube links. The skills check showed:

🧾 summarize (bins: summarize)

The agent read the skill's SKILL.md and found the required binary is summarize. It ran an npm registry check to find the correct package:

npm view summarize           → old, unrelated package (v1.5.0)
npm view @steipete/summarize → the correct one (v0.11.1) ✅
bin = { summarize: 'dist/cli.js' }

It then installed it using npm — which is within its permitted tools since npm runs under the openclaw user's NVM environment:

npm install -g @steipete/summarize

added 381 packages in 1m
/opt/openclaw/.nvm/versions/node/v22.22.0/bin/summarize

Verification:

command -v summarize
# /opt/openclaw/.nvm/versions/node/v22.22.0/bin/summarize ✅

Step 13: Verifying the Final Skill Status

After all of the above, the agent ran openclaw doctor and openclaw skills check to confirm the current state. The eligible skills count had grown from the original 7 to 9:

Skills Status Check

Total: 51
✓ Eligible: 9
⏸ Disabled: 1
✗ Missing requirements: 41

Ready to use:
📦 gh-issues
🐙 github
🎮 gog
📦 healthcheck
🎙️ openai-whisper      ← now working via whisper.cpp
📦 skill-creator
🧾 summarize           ← just installed
🎞️ video-frames
🌤️ weather

The doctor output also surfaced a few additional operational notes worth keeping in mind:

Performance hint: NODE_COMPILE_CACHE is not set. On a Pi, repeated CLI runs are slower without it. Add to your shell:

export NODE_COMPILE_CACHE=/var/tmp/openclaw-compile-cache
mkdir -p /var/tmp/openclaw-compile-cache
export OPENCLAW_NO_RESPAWN=1

Storage warning: OpenClaw's state directory is on the SD card (/dev/mmcblk0p2). SD cards wear faster under session and log churn. If you are running this long-term, moving OPENCLAW_STATE_DIR to a USB SSD will improve both speed and longevity.


Enable nano-banana-pro (The Right Way)

With uv already installed and on PATH, nano-banana-pro only needs one thing — a Gemini API key. The skill is built-in and requires no npm or pip install at all.

Switch to the openclaw user and edit the config:

sudo -u openclaw -H bash -l
nano ~/.openclaw/openclaw.json

Add or merge the skills block:

{
  "skills": {
    "install": {
      "preferBrew": false,
      "nodeManager": "npm"
    },
    "entries": {
      "nano-banana-pro": {
        "enabled": true,
        "apiKey": "YOUR_GEMINI_API_KEY"
      }
    }
  }
}

💡 Setting preferBrew: false permanently tells OpenClaw not to attempt Homebrew for dependency resolution. This is a one-time fix that prevents the brew error messages from appearing on every skill install.
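A malformed merge here fails silently at the next session start, so it is worth validating the JSON before restarting. A quick check using only python3 — the config path is illustrative:

```shell
# check_json <file> — succeeds only if the file parses as JSON.
check_json() {
  python3 -m json.tool "$1" >/dev/null 2>&1
}

# Demo on a throwaway file shaped like the config snippet above:
cat > /tmp/openclaw-demo.json <<'EOF'
{ "skills": { "install": { "preferBrew": false, "nodeManager": "npm" } } }
EOF
check_json /tmp/openclaw-demo.json && echo "valid JSON" || echo "broken JSON"

# On the real system you would run:
#   check_json ~/.openclaw/openclaw.json
```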

Start a new session after saving — skills are snapshotted at session start, not mid-session. Or restart the gateway:

sudo systemctl restart openclaw

The Architecture That Actually Works on Pi

After going through all of this, here is the clean mental model for running skills on a hardened Raspberry Pi:

APT packages (gh, ffmpeg, build-essential) — system-wide, visible to everyone, managed by Debian. Use for stable, well-packaged tools.

Manual binaries (gog, uv) — compiled or downloaded, placed in /usr/local/bin, owned by root, executable by all. Use for tools not in apt.

Python venv (/opt/openclaw/venv) — owned by the openclaw user, added to systemd PATH explicitly. Use for all Python-based skill dependencies.

Node (via NVM) — managed by the openclaw user's NVM install, added to systemd PATH explicitly. Used by OpenClaw itself and npm-installed tools like summarize.

Skills config (~/.openclaw/openclaw.json) — enables built-in skills and provides API keys. Not about installing binaries — about declaring what is available and setting preferBrew: false for Linux.

None of these paths should be mixed. Each layer has a clear responsibility.


What the Skills Status Output Is Actually Telling You

When you run openclaw doctor and see:

🍌 nano-banana-pro (env: GEMINI_API_KEY)
🎙️ openai-whisper (bins: whisper)
🧾 summarize (bins: summarize)

These are not errors. They are eligibility gates. The skill system is telling you exactly what is missing. Satisfy the gate, start a new session, and the skill becomes eligible. There is no separate install step for built-in skills.

The pattern of errors from OpenClaw's installer (brew not installed) is the system trying to auto-resolve binary gates using the macOS default method. On Linux, you resolve them manually — and once resolved, the skill gates pass and the skills activate.


The Agent as a Diagnostic Tool

One thing this entire process demonstrated clearly: your OpenClaw agent is capable of diagnosing and fixing its own environment, within the boundaries of its permissions.

When asked to check whisper eligibility, it ran find /opt -name whisper, tested torch import, identified the Illegal instruction error, researched the whisper.cpp alternative, built it from source using cmake installed via pip (since it could not use apt or sudo), created a compatibility wrapper, and confirmed transcription worked — all from a Telegram conversation.

When asked to check its own environment for dependency management, it detected the OS, arch, available tools, and sudo permissions, then wrote the policy to its workspace so it does not have to re-detect every time.

This is what the phrase "self-modifying agent" actually means in practice. Not magic — just an agent with enough access to inspect and improve its own operational context, within the boundaries of a least-privilege sandbox.


Summary: The ARM64 Skill Stack That Works

Skill              | Dependency           | Installation Method
github             | gh binary            | sudo apt install gh
gog                | gog binary           | Build from source (Go)
nano-banana-pro    | uv + Gemini API key  | curl astral.sh/uv + config
openai-whisper     | whisper binary       | whisper.cpp build + wrapper
summarize          | summarize binary     | npm install -g @steipete/summarize
openai-image-gen   | OpenAI API key       | Config only
openai-whisper-api | OpenAI API key       | Config only
notion             | Notion API key       | Config only

Final Thought

The skill system is not complicated once you understand the underlying model. Built-in skills need configuration, not installation. Binary dependencies need to live on the service process PATH, not your user's PATH. Python dependencies need to live in a venv owned by the service user. Some things — like full PyTorch on a Cortex-A72 — simply do not work, and recognizing that early saves hours.

The most unexpected lesson from this process was not technical. It was that the agent you are setting up can participate in its own setup. It diagnosed its own torch incompatibility. It built its own transcription backend. It detected its own environment and wrote its own tool policy. The hardened sandbox you spent so much effort building is not just a security boundary — it is also the boundary within which the agent learns to operate reliably.

The best architecture is not the one with the most features enabled — it is the one where every enabled feature actually works.

