
How to Deploy OpenClaw the Simple Way

A comprehensive walkthrough for deploying OpenClaw, an open-source self-hosted AI assistant, from zero to a working instance. Learn the four deployment paths and step-by-step instructions for Docker on a VPS.


OpenClaw is an open-source, self-hosted AI assistant that connects to your messaging apps and gives an AI model direct access to tools on your machine or server. But it's powerful enough to be dangerous if set up incorrectly. Here's how to deploy it safely and simply.

What OpenClaw Actually Is

Let's start with what OpenClaw actually is, because the hype cycle around this project has been so intense that the signal is getting buried under layers of breathless Medium posts and Twitter threads.

OpenClaw is an open-source, self-hosted AI assistant. It connects to the messaging apps you already use — WhatsApp, Telegram, Discord, Slack, Signal, iMessage — and gives an AI model direct access to tools on your machine or server: your files, your browser, your terminal, your calendar, your email.

It was created by Peter Steinberger, the Austrian developer behind PSPDFKit, and first published in November 2025 under the name "Clawdbot." Within two months it got renamed twice — first to "Moltbot" after Anthropic raised trademark concerns over the "Clawd" prefix, then to "OpenClaw" because, as Steinberger put it, "Moltbot never quite rolled off the tongue." The lobster mascot, Molty, stuck around.

The numbers are staggering. OpenClaw crossed 60,000 GitHub stars within 72 hours of its viral moment. By early March 2026, it had over 247,000 stars and 47,700 forks, making it one of the fastest-growing open-source repositories in GitHub history. The community has built thousands of "skills" (plugin-like capability modules), and the project ships updates at a pace that borders on reckless: multiple releases per week, each one stuffed with security fixes, new model providers, and expanded integrations.

But Here Is the Part Nobody Talks About Loudly Enough

OpenClaw is dangerous if you do not set it up correctly. One of the project's own maintainers, a contributor known as Shadow, warned on Discord that "if you can't understand how to run a command line, this is far too dangerous of a project for you to use safely." In March 2026, Chinese authorities restricted state-run enterprises and government agencies from running OpenClaw on office computers because of the security surface it exposes.

This is not a toy. It is a powerful, autonomous system that can run shell commands, control your browser, read and write your files, and act on your behalf across the internet. The deployment choices you make are not just about convenience — they are about security boundaries.

So let's get it right.


Understanding What You Are Actually Deploying

Before touching a single command, it helps to understand what OpenClaw is made of at a systems level. Skip this section if you just want the steps. Come back to it when something breaks at 2 AM and you need to know which layer is misbehaving.

OpenClaw is, at its core, a long-running Node.js service. It operates as a gateway sitting between three things:

  1. Messaging platforms — WhatsApp, Telegram, Discord, Slack, and others — where you send instructions and receive responses.
  2. AI model providers — Anthropic's Claude, OpenAI's GPT models, Google's Gemini, DeepSeek, Ollama for local models, and many more — that provide the intelligence.
  3. Local tools and capabilities — file system access, shell execution, browser automation, calendar integration, email, cron jobs, webhooks — that let it actually do things in the world.

The gateway listens on port 18789 by default and serves the "Control UI," a browser-based dashboard for chat, configuration, session management, and monitoring.

Three Conceptual Layers

When you deploy OpenClaw, you are deploying three conceptual layers:

  • The CLI and runtime layer launches and manages the assistant.
  • The configuration and onboarding layer is where you select model providers, connect messaging channels, and set up integrations.
  • The persistence and execution context layer determines whether OpenClaw runs on your laptop, inside a Docker container, on a VPS, or in a managed cloud environment — and this is where most of the security decisions live.

Configuration data, conversation history, and "memory" are all stored locally as files (largely Markdown and JSON) in a .openclaw directory. This is one of OpenClaw's defining characteristics: it is local-first. Your data stays on your machine unless you explicitly connect it to external services.
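Because everything is plain files, you can audit an instance with nothing but standard tools. A minimal sketch (the exact layout inside .openclaw varies by version, so this just walks whatever is there):

```shell
# Hedged sketch: list what an OpenClaw instance has stored on disk.
# The layout inside ~/.openclaw varies by version; this only walks the tree.
audit_openclaw_store() {
  dir="${1:-$HOME/.openclaw}"
  echo "-- markdown (memory, agent notes) --"
  find "$dir" -type f -name '*.md' 2>/dev/null
  echo "-- json (configuration) --"
  find "$dir" -type f -name '*.json' 2>/dev/null
}
```

Running `audit_openclaw_store` before a backup or migration shows you exactly what would move.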

The Four Deployment Paths

Path 1: Managed Cloud (One-Click)

Best for: People who want OpenClaw running in five minutes with zero server management.

DigitalOcean, Hostinger, Zeabur, Kimi Claw, and Alibaba Cloud all offer one-click deployments. These handle firewall rules, non-root execution, Docker isolation, and authenticated communication out of the box. The tradeoff is trusting a third party's infrastructure and their security hardening.

Path 2: Docker on Your Machine

Best for: Developers who want isolation without managing a remote server.

Running OpenClaw in Docker gives you a security boundary between the AI agent and your host OS. The container can access a workspace directory you define, but it cannot easily reach the rest of your filesystem. This is the approach Simon Willison took, noting he was not "brave enough" to run OpenClaw directly on his Mac.

Path 3: Docker on a VPS

Best for: Developers who want always-on availability with good security isolation.

This is the sweet spot for most serious users. A $5–$10/month VPS from DigitalOcean, Hetzner, or Contabo running Docker gives you 24/7 uptime, isolation from your personal machine, and full control over the environment. Your only ongoing cost beyond the server is API usage.

Path 4: Bare Metal (Direct Install)

Best for: People who know exactly what they are doing and want maximum control.

Install Node.js, run npx openclaw, walk through the onboarding wizard, and you are live. This is the fastest path to a working instance but the least secure — the AI agent has the same filesystem and network access as your user account.

Each path trades control against convenience. Pick the one that matches your comfort level and security requirements.

Docker on a VPS: Step-by-Step

Path 3 hits the best balance of simplicity, security, and reliability. Follow these steps and you will have a working OpenClaw instance, accessible from your phone via Telegram, running on a cloud server that costs less than a large coffee per month.

Step 1: Get a Server

You need a VPS running Ubuntu 22.04 or 24.04 with at least 2 GB of RAM (4 GB preferred), 10 GB of disk space, and root or sudo access. Any provider works:

  • DigitalOcean — $6/month for a basic Droplet
  • Hetzner — Excellent value in Europe
  • Contabo — Budget-friendly with generous specs
  • Hostinger — Offers a preconfigured OpenClaw template

Create an Ubuntu server, note the IP address, and confirm you can SSH into it.

Step 2: Secure the Server (Do Not Skip This)

An unsecured VPS running an AI agent with shell access is a recipe for a very bad week. SSH in and run:

# Update everything
sudo apt update && sudo apt upgrade -y

# Create a non-root user
sudo adduser openclaw
sudo usermod -aG sudo openclaw

# Set up a basic firewall
sudo apt install ufw -y
sudo ufw allow OpenSSH
sudo ufw allow 18789/tcp   # OpenClaw Control UI (skip if you will proxy it through Nginx)
sudo ufw enable

# Switch to your new user
su - openclaw

If you plan to connect a domain and use HTTPS (recommended for any production-ish setup), install Nginx and Certbot after OpenClaw is running.
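When you get to that point, a reverse-proxy sketch like the following is a reasonable starting shape ("openclaw.example.com" is a placeholder; `certbot --nginx` adds TLS once plain HTTP works):

```nginx
# /etc/nginx/sites-available/openclaw — reverse-proxy sketch, not a vetted config.
server {
    listen 80;
    server_name openclaw.example.com;

    location / {
        proxy_pass http://127.0.0.1:18789;
        # WebSocket upgrade headers so the chat UI can stream responses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```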

Step 3: Install Docker

# Install Docker
sudo apt install docker.io docker-compose-v2 -y

# Add your user to the docker group
sudo usermod -aG docker $USER

# Log out and back in for the group change to take effect
exit
# SSH back in as your openclaw user

Verify Docker is working:

docker --version
docker compose version

Step 4: Clone OpenClaw and Run the Setup Script

# Install git
sudo apt install git -y

# Clone the repository
git clone https://github.com/openclaw/openclaw.git

# Navigate into the directory
cd openclaw

# Run the Docker setup script
bash scripts/docker/setup.sh

The setup script automatically builds the OpenClaw Docker image, runs the onboarding wizard inside a container, generates a gateway token for the Control UI, and creates the necessary directories and Docker Compose configuration.

Step 5: Walk Through the Onboarding Wizard

The wizard is interactive and takes about two minutes. It will ask you to:

  1. Choose a model provider. Anthropic (Claude) and OpenAI (GPT) are the most common choices. You can also use Google Gemini, DeepSeek, Mistral, or run local models through Ollama. Have your API key ready.
  2. Enter your API key. Model usage is billed to this key; this is how you pay for the intelligence OpenClaw uses.
  3. Configure messaging channels (optional but recommended). Connect WhatsApp, Telegram, Discord, or other platforms. Telegram is by far the easiest.
  4. Set up device pairing. This ensures only you can talk to your instance.

Step 6: Connect Telegram

Telegram is the path of least resistance for getting OpenClaw onto your phone:

  1. Open Telegram and search for @BotFather.
  2. Send /newbot and follow the prompts. Give it a name and username.
  3. BotFather will give you a token. Copy it.
  4. Back in your SSH session, run:
docker compose run --rm openclaw-cli channels add --channel telegram --token "YOUR_BOT_TOKEN"
  5. Send a message to your new bot in Telegram. OpenClaw will send back a pairing code.
  6. Approve the pairing:
docker compose run --rm openclaw-cli pairing approve telegram YOUR_PAIRING_CODE

You can now message your AI assistant from your phone, anywhere in the world.

Step 7: Start the Gateway

docker compose up -d openclaw-gateway

The -d flag runs it in the background. Your OpenClaw instance is now live and will restart automatically if the server reboots (the setup script configures restart: unless-stopped in Docker Compose).
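To confirm the gateway actually came up, you can poll the Control UI port before moving on. A small sketch (18789 is the documented default; any HTTP response counts as "up"):

```shell
# Poll the Control UI until it answers or we give up.
wait_for_gateway() {
  host="${1:-127.0.0.1}" port="${2:-18789}" tries="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # Any HTTP response (even the login page) means the gateway is listening.
    if curl -sS --connect-timeout 2 -o /dev/null "http://$host:$port"; then
      echo "gateway up on $host:$port"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gateway not responding on $host:$port"
  return 1
}
```

Pairing it with `docker compose ps` and `docker compose logs openclaw-gateway` tells you quickly whether a failure is the container or the network.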

Step 8: Access the Control UI

Open a browser and navigate to:

http://YOUR_SERVER_IP:18789

You will be prompted to log in with your gateway token. If you lost the token during setup, retrieve it with:

docker compose run --rm openclaw-cli dashboard --no-open

The Control UI gives you a web-based chat interface, configuration options, session management, and monitoring tools. It is where you go to install skills, tweak settings, and watch what your agent is doing.

Prefer a one-click deploy?

If the step-by-step above felt like too many steps, several providers have collapsed the entire process into a single button click.

  • DigitalOcean Marketplace — Search for OpenClaw, choose a Droplet size, and deploy. The image comes preconfigured with Docker isolation, firewall hardening, non-root execution, and authenticated communication.
  • Hostinger VPS Template — Installs OpenClaw automatically with all Docker dependencies. Optionally add nexos.ai credits to connect multiple LLM providers without setting up individual API keys.
  • Zeabur Template — Preconfigured and ready to use. Choose "Docker image" as the source, fill in your environment variables, and deploy. Uses a default model from Zeabur AI Hub with a failover chain.

Running OpenClaw Locally in Docker

If you want to experiment before committing to a cloud server, running OpenClaw in Docker on your own machine is the safest local option.

Prerequisites

  • Docker Desktop installed
  • An API key from Anthropic, OpenAI, or another supported provider
  • A terminal

Steps

# Clone the repo
git clone https://github.com/openclaw/openclaw.git
cd openclaw

# Run the setup script
bash scripts/docker/setup.sh

The script creates two directories on your machine:

  • ~/.openclaw — holds configuration, memory, API keys, and agent settings.
  • ~/openclaw/workspace — the working directory accessible to the agent. Files the agent creates appear here.

Walk through the onboarding wizard, connect a messaging channel if you want, and start the gateway:

docker compose up -d openclaw-gateway

Access the Control UI at http://localhost:18789.

Important Notes for Local Docker

  • The container mounts docker.sock only if sandbox prerequisites pass. If sandbox setup fails, the script disables sandboxing automatically.
  • The image runs as user node (uid 1000). If you see permission errors on /home/node/.openclaw, make sure your host bind mounts are owned by uid 1000.
  • Administrative commands must be run from the directory containing your docker-compose.yml:
docker compose run --rm openclaw-cli <command>

Post-Deployment: The Things That Actually Matter

Getting OpenClaw running is step one. Here is what to do immediately after deployment.

Install Skills

Skills are OpenClaw's plugin system. Each skill is a directory containing a SKILL.md file with metadata and instructions. Skills can be bundled with the software, installed from the community hub (ClawHub), or written by you and stored in your workspace.

The community has built thousands of skills covering everything from web browsing and code execution to calendar management and smart home control. Browse what is available through the Control UI and install what fits your workflow.

Security note: Cisco's AI security research team tested a third-party OpenClaw skill and found it performing data exfiltration and prompt injection without user awareness. The skill repository does not have robust vetting. Review skills before installing them.

Set Up Cron Jobs and Automations

One of OpenClaw's most compelling features is its heartbeat daemon, which can act without being prompted. You can set up cron jobs in natural language — "every Monday morning, summarize my unread emails and post the summary to my Telegram" — and OpenClaw will execute them on schedule.

This is also where the security risk gets real. An always-on agent with cron access can do a lot of damage if compromised. Be thoughtful about what you automate.

Configure Memory

OpenClaw maintains persistent memory across sessions, stored as Markdown files on disk. This is what allows it to remember your preferences, project context, and past conversations. Over time, this memory becomes the most valuable part of your instance.

Back it up regularly. The ~/.openclaw/data directory is your source of truth for session memory and conversation history. A simple cron job to rsync it to an S3-compatible bucket is sufficient for most setups.
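That advice fits in a few lines of shell. A minimal sketch: snapshot the directory locally (rsync when available, plain copy otherwise), then push the snapshot offsite with a tool of your choice — the "s3backup" rclone remote in the comment is hypothetical:

```shell
# Snapshot the OpenClaw memory directory into a dated folder.
backup_openclaw() {
  src="${1:-$HOME/.openclaw/data}"
  dest="${2:-$HOME/backups}/openclaw-$(date +%F)"
  mkdir -p "$dest"
  if command -v rsync >/dev/null 2>&1; then
    rsync -a --delete "$src/" "$dest/"
  else
    cp -R "$src/." "$dest/"
  fi
  echo "$dest"
}

# Cron example (03:00 nightly); "s3backup:" is a hypothetical rclone remote:
#   0 3 * * * /home/openclaw/backup.sh && rclone sync ~/backups s3backup:openclaw
```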

Harden Security

If your instance is exposed to the internet, take these additional steps:

  • Set OPENCLAW_GATEWAY_BIND=loopback if you are accessing the Control UI through a reverse proxy like Nginx.
  • Use dangerouslyDisableDeviceAuth only if you are replacing device pairing with gateway token authentication for cloud deployments. The name is not subtle. Know what you are doing.
  • Configure allowedOrigins in your openclaw.json to restrict which domains can access the Control UI.
  • Keep the gateway token secret. Treat it like a password. Anyone with this token can control your agent.
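For the loopback binding, a Docker Compose override keeps the hardening separate from the generated config. A sketch only — the service name matches the gateway service used earlier, but whether your version reads OPENCLAW_GATEWAY_BIND from the environment is worth verifying against the current docs:

```yaml
# docker-compose.override.yml — hardening sketch, not a vetted config.
services:
  openclaw-gateway:
    environment:
      - OPENCLAW_GATEWAY_BIND=loopback
    ports:
      - "127.0.0.1:18789:18789"   # reachable only through the reverse proxy on this host
```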

Updating OpenClaw

OpenClaw releases frequently — sometimes multiple times per week. To update a Docker deployment:

cd openclaw
docker compose pull          # Pull the latest image
docker compose up -d         # Recreate container with the new image
docker image prune -f        # Clean up old images

Your configuration and data persist in the mounted volumes, so updates are non-destructive. Check the GitHub releases page before upgrading on production — breaking changes are rare but do happen, usually around skill API changes or security hardening that tightens previously permissive defaults.

For the CLI on bare metal:

npm update -g openclaw
sudo systemctl restart openclaw
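The restart command above assumes a systemd unit exists. If you are setting one up yourself, a minimal sketch — the launch command follows the npx flow from the bare-metal path, and the user, paths, and everything else here are assumptions to adjust:

```ini
# /etc/systemd/system/openclaw.service — minimal sketch; adjust paths and user.
[Unit]
Description=OpenClaw gateway
After=network-online.target
Wants=network-online.target

[Service]
User=openclaw
WorkingDirectory=/home/openclaw
ExecStart=/usr/bin/env npx openclaw
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After saving it, `sudo systemctl daemon-reload && sudo systemctl enable --now openclaw` brings the service up and keeps it across reboots.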

Troubleshooting Common Issues

If something goes wrong after deployment, work through these known issues before digging deeper.

  1. "Gateway failed to start: non-loopback Control UI requires gateway.controlUi.allowedOrigins"

    This error surfaced prominently starting with version 2026.2.25. Edit ~/.openclaw/openclaw.json and add:

    {
      "gateway": {
        "controlUi": {
          "allowedOrigins": [
            "http://127.0.0.1:18789",
            "http://localhost:18789"
          ]
        }
      }
    }
    

    Then restart the service.

  2. Container crashes immediately

    Check logs with docker logs openclaw. The most common causes are missing API keys, insufficient memory, or port conflicts on 18789.

  3. WhatsApp QR code not showing

    Run onboarding in interactive mode:

    docker compose run --rm openclaw-cli onboard
    

    The QR code needs a terminal that supports rendering. If you are in a minimal SSH session, try increasing your terminal width.

  4. Permission errors on mounted volumes

    Ensure the host directories exist and are owned by your user:

    mkdir -p ~/.openclaw ~/openclaw/workspace
    

    If using the Docker image's default node user (uid 1000), match bind mount ownership:

    sudo chown -R 1000:1000 ~/.openclaw ~/openclaw/workspace
    
  5. High memory usage

    Set a memory limit in your docker-compose.yml to prevent the container from consuming your entire server's RAM:

    mem_limit: 2g
    

Costs and the Bigger Picture

What Does This Actually Cost?

Running OpenClaw itself is free — it is MIT-licensed open source. Your costs come from two places:

  1. Server hosting: A basic VPS costs $5 to $10 per month. A DigitalOcean Droplet at $6/month or a Hetzner instance handles typical personal use comfortably.
  2. AI API usage: This varies depending on how much you use it and which models you choose. Claude and GPT-4 class models are more expensive per token than smaller models. If you connect a local model through Ollama, this cost drops to zero — but you need beefier hardware.

For a solo user doing moderate automation (email triage, scheduling, code review, web research), expect somewhere between $10 and $30 per month in API costs on top of hosting.

The Bigger Picture

OpenClaw represents something genuinely new. It is not just a chatbot you talk to. It is an autonomous agent that runs on your own infrastructure, remembers your context across sessions, takes initiative on a schedule, and can be extended with community-built skills to do almost anything a human can do at a computer.

The project is moving at an extraordinary pace. Peter Steinberger announced in February 2026 that he would be joining OpenAI and that the project would be moved to an open-source foundation. The community has already spawned derivatives like IronClaw (a Rust reimplementation focused on security from the NEAR AI team) and Moltbook (a social network where AI agents interact autonomously).

The architecture is smart: local-first data storage, a gateway that cleanly separates concerns, a skill system that makes capabilities portable and composable, and model-agnostic design that lets you switch between providers without reconfiguring your workflows.

But with great power comes the potential for spectacular failure. OpenClaw can run shell commands, control browsers, send emails, and make API calls. A prompt injection attack, a malicious skill, or a misconfigured permission could have real consequences. The security surface is enormous, and the project, for all its velocity, is still young.

Deploy it. Experiment with it. Build with it. But treat it like what it is: an administrative system with root-level ambitions, not a cute chatbot with a lobster emoji.

Back up your memory directory

The ~/.openclaw/data directory holds your session memory, conversation history, and agent context. It is the most valuable part of a mature OpenClaw instance — and it is easy to lose in a server migration or accidental volume wipe.

Set up a regular rsync job to an S3-compatible bucket before you start relying on OpenClaw for anything important.
