OpenClaw is a self-hosted gateway that connects your messaging apps to AI coding agents. You run a single process on your own hardware, and it becomes the bridge between Telegram, Discord, WhatsApp, Signal, and an always-available AI assistant that can read files, run commands, browse the web, and remember context across conversations.

The appeal is straightforward: your conversations stay on your hardware, you control what the agent can access, and you pay only for the API calls to your model provider. No monthly SaaS fee. No third-party hosting service reading your messages.

This guide walks through deploying OpenClaw on a Linux server from scratch. We'll cover the prerequisites, installation, configuration, connecting a Telegram bot, and running everything as a systemd service that survives reboots. The commands here are tested on Ubuntu 24.04 LTS, but they apply to Debian 12+ and most modern Linux distributions with minor adjustments.

What you'll need: A Linux server (VPS, homelab, even a Raspberry Pi 4), SSH access, and about 30 minutes. An Anthropic API key or Claude subscription for the AI model. A Telegram account if you want to use the Telegram channel.

01 What OpenClaw Actually Does

Before we start installing things, it helps to understand the architecture. OpenClaw is not a chatbot wrapper. It's an agent gateway, and one architectural property matters most for deployment:

The Gateway itself is lightweight. It uses minimal CPU and RAM because the AI model runs remotely at Anthropic (or OpenAI, or wherever you point it). Your server handles message routing, session state, file operations, and tool execution.

02 Prerequisites

Hardware

OpenClaw is not resource-hungry. The minimum is 1 vCPU, 512 MB RAM, and 1 GB of disk. For a production setup where the agent uses its workspace actively (memory files, skills, git repos), aim for 2 vCPU, 1 GB RAM, and 5 GB SSD. A Hetzner CX22 at roughly €4.50/month handles it easily. So does a Raspberry Pi 4 with 4 GB RAM sitting under your desk.
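A quick pre-flight check against those numbers, using standard Linux tooling:

```shell
# Compare the host against the recommended 2 vCPU / 1 GB RAM / 5 GB disk
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "vCPUs: $cpus, RAM: ${mem_mb} MB, free disk: ${disk_gb} GB"
[ "$cpus" -ge 2 ] && [ "$mem_mb" -ge 900 ] || echo "below recommended spec (still fine for testing)"
```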

Software

You need Node.js 22 or newer. Check what you have:

node --version
# Need v22.x.x or higher

If Node isn't installed or is outdated, install it via NodeSource:

curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs

Or use nvm if you prefer managing multiple Node versions:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.0/install.sh | bash
source ~/.bashrc
nvm install 22
nvm use 22

Accounts

You'll need an Anthropic API key (or a Claude Pro/Max subscription) for the AI model. For this guide, we're connecting Telegram, so you'll also need a Telegram account to create a bot through BotFather. Both are free to set up.

03 Installation

OpenClaw has an install script that handles everything:

curl -fsSL https://openclaw.ai/install.sh | bash

This installs the openclaw CLI globally. If you'd rather use npm directly:

npm install -g openclaw@latest

Verify it worked:

openclaw --version

Now run the onboarding wizard. This creates your config file, sets up the workspace directory, and walks you through authentication:

openclaw onboard

The wizard will ask you to authenticate with your AI provider (Anthropic OAuth or API key), configure basic Gateway settings, and optionally set up channels. Follow the prompts. When it finishes, you'll have a working config at ~/.openclaw/openclaw.json and a workspace at ~/.openclaw/workspace/.

Run the doctor command to verify everything is configured correctly:

openclaw doctor

If it flags any issues, openclaw doctor --fix can resolve most of them automatically.

Docker alternative

If you prefer containers, OpenClaw ships with Docker support:

git clone https://github.com/openclaw/openclaw.git
cd openclaw
./docker-setup.sh

The setup script builds the image, runs the onboarding wizard, and starts the Gateway via Docker Compose. Your config and workspace are bind-mounted from ~/.openclaw/ on the host, so data persists across container restarts.
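To confirm the container actually came up, the standard Docker Compose commands apply (the exact service name depends on the repo's compose file):

```shell
# From the cloned openclaw/ directory
docker compose ps                # the Gateway container should show as running
docker compose logs --tail=50    # recent Gateway logs
```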

04 Configuration Overview

After onboarding, your config lives at ~/.openclaw/openclaw.json. It's JSON5 (comments and trailing commas are fine). Here's a minimal config that works:

// ~/.openclaw/openclaw.json
{
  agents: {
    defaults: {
      workspace: "~/.openclaw/workspace",
    },
  },
}

That's it for a bare-minimum setup. OpenClaw fills in sane defaults for everything else. But for a real deployment, you'll want to be explicit about a few things:

{
  gateway: {
    bind: "loopback",        // Only listen on 127.0.0.1
    port: 18789,
    auth: {
      mode: "token",         // Protect the Control UI
    },
  },

  agents: {
    defaults: {
      workspace: "~/.openclaw/workspace",
      model: {
        primary: "anthropic/claude-sonnet-4-5",
      },
      heartbeat: {
        every: "30m",
        target: "last",
      },
    },
  },

  session: {
    dmScope: "per-channel-peer",  // Isolate conversations per user
    reset: {
      mode: "daily",
      atHour: 4,
      idleMinutes: 120,
    },
  },

  tools: {
    profile: "messaging",
    fs: { workspaceOnly: true },
    exec: {
      security: "deny",
      ask: "always",
    },
  },
}

Some notes on what matters here: bind: "loopback" keeps the Gateway off the network entirely; only processes on the machine itself can reach it. auth.mode: "token" puts the Control UI behind a token instead of leaving it open. fs.workspaceOnly: true confines file operations to the workspace directory, and exec.security: "deny" with ask: "always" means shell commands are blocked unless you explicitly approve each one. dmScope: "per-channel-peer" gives each user their own isolated session, so one person's conversation never leaks into another's.

You can edit this file directly, use openclaw config set <key> <value> from the CLI, or use the Config tab in the Control UI. The Gateway watches the file and hot-reloads changes.
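For example, switching the primary model from the CLI might look like this (the dotted key path is assumed to mirror the JSON structure above):

```shell
# Set a nested config key; the Gateway hot-reloads the change
openclaw config set agents.defaults.model.primary "anthropic/claude-sonnet-4-5"

# Confirm the file was updated
grep "claude-sonnet" ~/.openclaw/openclaw.json
```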

05 Connecting Telegram

Telegram is the fastest channel to get working. It uses the Bot API with long polling, so you don't need a public URL or webhook endpoint.

Step 1: Create the bot

Open Telegram, search for @BotFather (look for the blue checkmark), and send /newbot. Pick a display name and a username ending in bot. BotFather gives you a token that looks like 123456789:ABCdefGHIjklMNOpqrSTUvwxYZ. Copy it.

Step 2: Add the token to your config

You have two options. Add it to the config file:

{
  channels: {
    telegram: {
      enabled: true,
      botToken: "123456789:ABCdefGHIjklMNOpqrSTUvwxYZ",
      dmPolicy: "pairing",
      groups: {
        "*": { requireMention: true },
      },
    },
  },
}

Or set it as an environment variable (better for production):

export TELEGRAM_BOT_TOKEN="123456789:ABCdefGHIjklMNOpqrSTUvwxYZ"

The dmPolicy: "pairing" setting means anyone who messages the bot gets a one-time pairing code. Until you approve that code, the bot ignores them. This is the right default for a personal assistant you're exposing on Telegram.
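Before starting the Gateway, you can sanity-check the token directly against Telegram's Bot API — getMe is a standard Bot API method, independent of OpenClaw:

```shell
# A valid token returns {"ok":true,...} with your bot's username;
# an invalid one returns {"ok":false,"error_code":401,...}
curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getMe"
```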

Step 3: Start the Gateway and pair your account

# Start or restart the gateway
openclaw gateway restart

# Send any message to your bot in Telegram
# The bot will reply with a pairing code

# Approve the pairing from your server
openclaw pairing list telegram
openclaw pairing approve telegram <CODE>

Once paired, you can send messages to your bot and the AI agent will respond. Try asking it something. It should reply within a few seconds.

Step 4: Group chat setup (optional)

If you want the bot in group chats, tell BotFather to disable privacy mode (/setprivacy → Disabled) so the bot can see all messages, or make it a group admin. Then add it to the group. With requireMention: true, it only responds when someone @mentions it in the group.

Skip the guesswork

This post covers the core deployment flow. The full OpenClaw Deployment Guide goes deeper: Tailscale mesh networking for remote access without port forwarding, multi-channel setup (Discord, Signal, WhatsApp), security hardening with the built-in audit tool, automated backups, reverse proxy configs for Caddy and nginx, and a troubleshooting section for every common failure mode.

Get the full deployment guide — $24

06 Running as a Systemd Service

Running OpenClaw in a terminal session that dies when you close your laptop is fine for testing. For production, you want a systemd service that starts on boot, restarts on failure, and writes logs to journald.

Create a dedicated user

Don't run OpenClaw as root. Create a service account:

sudo useradd -m -s /bin/bash openclaw
# Note: a global npm install needs a user-writable prefix when run without root.
# If this fails with EACCES, have the openclaw user run
# `npm config set prefix ~/.npm-global` first and add ~/.npm-global/bin to PATH.
sudo -u openclaw bash -c 'npm install -g openclaw@latest'
sudo -u openclaw bash -c 'openclaw onboard'

Write the unit file

Create /etc/systemd/system/openclaw.service:

[Unit]
Description=OpenClaw Gateway
Documentation=https://docs.openclaw.ai
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=openclaw
Group=openclaw
WorkingDirectory=/home/openclaw
Environment=HOME=/home/openclaw
Environment=NODE_ENV=production
Environment=OPENCLAW_GATEWAY_TOKEN=your-long-random-token-here

ExecStart=/usr/bin/openclaw gateway --port 18789
Restart=always
RestartSec=10
StartLimitIntervalSec=300
StartLimitBurst=5

LimitNOFILE=65536
MemoryMax=1G

NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=false
ReadWritePaths=/home/openclaw/.openclaw
PrivateTmp=true

StandardOutput=journal
StandardError=journal
SyslogIdentifier=openclaw

[Install]
WantedBy=multi-user.target

Generate a strong token for the OPENCLAW_GATEWAY_TOKEN line:

openssl rand -hex 32
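If you'd rather keep the token out of the unit file itself (which any user who can read /etc/systemd/system can see), systemd's EnvironmentFile directive is one option — a sketch, assuming a root-owned file at /etc/openclaw/gateway.env:

```shell
# Store the token in a root-only file instead of inline in the unit
sudo mkdir -p /etc/openclaw
echo "OPENCLAW_GATEWAY_TOKEN=$(openssl rand -hex 32)" | sudo tee /etc/openclaw/gateway.env > /dev/null
sudo chmod 600 /etc/openclaw/gateway.env

# Then, in the [Service] section, replace the Environment=OPENCLAW_GATEWAY_TOKEN=...
# line with:
#   EnvironmentFile=/etc/openclaw/gateway.env
```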

Enable and start

sudo systemctl daemon-reload
sudo systemctl enable openclaw
sudo systemctl start openclaw

# Verify it's running
sudo systemctl status openclaw

# Watch the logs
sudo journalctl -u openclaw -f

The service restarts automatically on failure (up to 5 times in 5 minutes before systemd backs off). The NoNewPrivileges, ProtectSystem=strict, and PrivateTmp directives add a layer of sandboxing.
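You can ask systemd itself how much that sandboxing buys you:

```shell
# Prints a per-directive exposure report for the unit and an overall
# score (lower means more confined)
systemd-analyze security openclaw
```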

User service alternative

If you ran openclaw onboard --install-daemon during setup, it created a user-level systemd service instead. Manage it with:

systemctl --user status openclaw-gateway
systemctl --user restart openclaw-gateway
journalctl --user -u openclaw-gateway -f

# Keep it running after logout
loginctl enable-linger $USER

07 Basic Security

A few things to lock down before you walk away from this deployment:

File permissions

chmod 700 ~/.openclaw
chmod 600 ~/.openclaw/openclaw.json
chmod 700 ~/.openclaw/credentials
chmod -R go-rwx ~/.openclaw/credentials/

Firewall

Don't expose port 18789 to the internet. The Gateway should listen on loopback only. If you need remote access, use SSH tunneling or Tailscale:

sudo apt install -y ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable
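With the Gateway on loopback only, remote access to the Control UI works over an SSH tunnel — run this from your laptop, substituting your own user and host:

```shell
# Forward local port 18789 to the Gateway's loopback port on the server
ssh -N -L 18789:127.0.0.1:18789 you@your-server
# Then browse to http://127.0.0.1:18789 on your local machine
```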

Tokens in environment variables

Keep sensitive tokens out of config files where possible. Use environment variables in your systemd unit or a secrets manager. Generate long random tokens:

openssl rand -hex 32

Built-in security audit

OpenClaw ships with an audit tool that checks for common misconfigurations:

openclaw security audit
openclaw security audit --fix

Run this after any config change. It catches things like overly permissive DM policies, exposed ports, and weak auth settings.

08 Verifying Your Deployment

Once the systemd service is running, verify everything end to end:

# Gateway status
openclaw gateway status

# Health check
openclaw health

# Channel connectivity
openclaw channels status --probe

# Full diagnostic
openclaw doctor

Send a message to your Telegram bot. If the agent responds with context-aware answers, you're done. You have a self-hosted AI assistant running on your own hardware, responding through Telegram, with persistent sessions and memory.

For ongoing monitoring, a quick health-check script in cron works well:

#!/bin/bash
# Restart the Gateway if the local health endpoint stops answering.
# curl's %{http_code} prints 000 when the connection fails outright,
# so a dead Gateway also triggers the restart.
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:18789/health)
if [ "$HTTP_CODE" != "200" ]; then
    sudo systemctl restart openclaw
fi

Drop that in /home/openclaw/health-check.sh, make it executable, and add a cron entry to run it every 5 minutes.
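The crontab entry for that could look like the following (added to root's crontab, since the script restarts a system service):

```shell
chmod +x /home/openclaw/health-check.sh

# Append the job to root's crontab: run every 5 minutes
( sudo crontab -l 2>/dev/null; echo "*/5 * * * * /home/openclaw/health-check.sh" ) | sudo crontab -
```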

What We Didn't Cover

This guide gets you from zero to a working OpenClaw deployment with Telegram. But a production-grade setup involves more, and the full OpenClaw Deployment Guide covers what we skipped.

Get the Complete OpenClaw Deployment Guide

13 chapters covering every aspect of deploying and running OpenClaw in production. From initial install through Tailscale networking, multi-channel setup, security hardening, backups, monitoring, and troubleshooting. Written for developers and homelabbers who want to get it right the first time.

Download the guide — $24