Tutorials

20 February 2026

OpenClaw Personal Assistant Walkthrough on VESSL Cloud

Ever thought about building your own AI assistant? With OpenClaw and VESSL Cloud, you can spin one up in minutes — no GPU required, just a few terminal commands.

Summary

  • Run OpenClaw in API mode on a CPU-only VESSL workspace.
  • Use openai/gpt-5.2 as the default model.
  • Keep outputs safe in /shared/demo/output/hooks.txt.
  • Maintain context using OpenClaw's built-in session and memory features.
  • Connect Telegram (optional) if you want a chat interface.

Who should use this guide

Setting up an AI assistant workflow is straightforward. This guide is for:

  • Non-developers who want a practical AI personal assistant.
  • Teams verifying assistant workflows (prompt → context retention → task execution → confirmation) before investing in GPU infrastructure.
  • Anyone looking for a fast, stable setup.

Capabilities and limitations

What it can do

  • Remember session intent and ask only for missing information.
  • Generate executable checklists and pre-fill details.
  • Resume unresolved tasks when configured correctly.

What it cannot do

  • Automatically complete external payments, bookings, or orders without custom tool integration and your final approval.

Safety principles

  • Always ask for your confirmation before taking irreversible actions.

Prerequisites

Before starting, prepare the following. If you are new to VESSL Cloud, see the Getting Started guide.

  • Create a VESSL workspace: Select CPU Only. Set the workspace volume to /root, the shared volume to /shared, and add custom port 18789.
Workspace setup
  • Open the JupyterLab Terminal.
Open JupyterLab on VESSL Cloud
  • Get your OpenAI (or other LLM provider's) API key.

Quick start: 4 steps to run OpenClaw

Run these four steps in order to start your OpenClaw gateway and access the API.

Step 1: Install packages

Install the required packages and the OpenClaw CLI. Run this only once when you create the workspace.

export PATH="$HOME/.local/bin:$PATH"
ls /etc/apt/sources.list.d/*git-lfs*.list 2>/dev/null | xargs -r sed -i 's/^deb /# deb /'
apt-get update
apt-get install -y npm python3-pip curl
command -v openclaw >/dev/null || (curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash -s -- --install-method npm)

If openclaw returns a Node version error, install Node 22:

curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"
nvm install 22
nvm use 22
nvm alias default 22
Download OpenClaw on VESSL Cloud

Step 2: Store your API key securely

Keep your API key out of your terminal history. This step disables history, saves your key to a file, and allows subsequent steps to load it using source.

unset HISTFILE
export HISTFILE=/dev/null HISTSIZE=0 HISTFILESIZE=0
set +o history
read -s -p "OPENAI_API_KEY: " OPENAI_API_KEY; echo
set -o history
umask 077
cat > /root/.demo_secrets <<EOF2
export OPENAI_API_KEY="$OPENAI_API_KEY"
export OPENAI_MODEL="gpt-5.2"
export OPENCLAW_GATEWAY_TOKEN="demo123"
EOF2
unset OPENAI_API_KEY

Step 3: Start the OpenClaw gateway

Bind the gateway in LAN mode and expose it through VESSL's custom port 18789. This makes it accessible from outside the workspace.

source /root/.demo_secrets
openclaw config set gateway.mode local
openclaw config set gateway.bind lan
openclaw config set gateway.port 18789
openclaw config set agents.defaults.model.primary "openai/gpt-5.2"
openclaw gateway --bind lan --port 18789 --allow-unconfigured --token "$OPENCLAW_GATEWAY_TOKEN"
Note: Open the workspace's External Link for port 18789 to verify access.
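Before opening the External Link, you can confirm the gateway is listening from inside the workspace. Here is a minimal reachability check; the host and port are the defaults used in Step 3, and this only tests that the TCP port answers, not that authentication works:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 18789 is the custom port configured in Step 3; run this inside the workspace.
    print("gateway reachable:", port_open("127.0.0.1", 18789))
```

If this prints False while the gateway is running, re-check the --bind and --port flags from Step 3.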

Step 4: Verify setup (Smoke test)

Ensure everything works. This test sends a prompt and saves the result to /shared, allowing it to persist through Pause/Resume cycles.

source /root/.demo_secrets
python -m pip install -q openai
mkdir -p /shared/demo/output
python - <<'PY'
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
r = client.responses.create(
    model="gpt-5.2",
    input="Write 5 short blog hook lines for a cloud workspace demo. Max 12 words per line.")
text = r.output_text.strip()
path = "/shared/demo/output/hooks.txt"
with open(path, "w", encoding="utf-8") as f:
    f.write(text + "\n")
print(path)
print(text)
PY

Verify output in shared storage

Check that the output was saved successfully:

ls -lh /shared/demo/output
cat /shared/demo/output/hooks.txt
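Beyond checking that the file exists, you can verify the output actually satisfies the prompt's constraints (five lines at most, no more than 12 words per line). A small validator sketch; the sample text is illustrative, and in the workspace you would pass the contents of /shared/demo/output/hooks.txt instead:

```python
def validate_hooks(text: str, max_lines: int = 5, max_words: int = 12) -> bool:
    """Check text has 1..max_lines non-empty lines, each with max_words words or fewer."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    return 0 < len(lines) <= max_lines and all(
        len(ln.split()) <= max_words for ln in lines
    )

# Illustrative sample; replace with open("/shared/demo/output/hooks.txt").read().
sample = "Spin up a cloud workspace in minutes.\nNo GPU required for this demo."
print(validate_hooks(sample))  # True
```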
View the outputs from an OpenClaw agent

Your basic OpenClaw and VESSL Cloud setup is complete. The next sections cover how to configure it as a personal assistant.

Configure context continuity

OpenClaw maintains continuity through built-in session handling and workspace memory. Context typically breaks due to:

  • Custom relays that are stateless by design.
  • Restarted processes creating a new session path.
  • Missing session policies for multi-channel identity.

For most users, a one-time setup of workspace memory and session policy works best.

These files teach the assistant about you. Replace all <...> placeholders with your details before running the code.

WSP="$HOME/.openclaw/workspace"
mkdir -p "$WSP/memory"

cat > "$WSP/USER.md" <<'EOF2'
name: <your name>
preferred_language: <your language, e.g. English>
primary_goal: <what this assistant is for, e.g. personal assistant for scheduling and research>
EOF2

cat > "$WSP/MEMORY.md" <<'EOF2'
- <default reply language rule, e.g. Reply in English by default.>
- <key task pattern, e.g. For meeting requests, check calendar conflicts first.>
- Before irreversible actions, return:
  1) ready-to-execute checklist
  2) pre-filled details
  3) final confirmation question
- Never claim an action is complete without tool confirmation.
EOF2

Run a quick validation right after setup:

openclaw config get session.dmScope
openclaw config get session.reset.mode
openclaw hooks check

Then run /context once in chat to confirm workspace rules are loaded.

Tip: For a shopping assistant, set primary_goal: shopping assistant and add rules like "ask only for missing fields first." For a calendar manager, set primary_goal: calendar manager and add "check for scheduling conflicts before booking."
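As a concrete example, the shopping-assistant variant from the tip above could be written out like this; the name and language values are illustrative placeholders to replace with your own details:

```shell
WSP="$HOME/.openclaw/workspace"
mkdir -p "$WSP"

# Illustrative profile for a shopping assistant; edit the values to match yours.
cat > "$WSP/USER.md" <<'EOF2'
name: Jane Doe
preferred_language: English
primary_goal: shopping assistant
EOF2

# Append the matching workspace rule from the tip.
printf -- "- Ask only for missing fields first.\n" >> "$WSP/MEMORY.md"
```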

Configure session policy (Single user)

openclaw config set session.dmScope main
# Optional compatibility key (not required for most setups)
openclaw config set session.mainKey main
openclaw config set session.reset.mode idle
openclaw config set session.reset.idleMinutes 10080

main is the most stable default for a private assistant.

session.mainKey is an optional compatibility setting and is not required for most setups.

Personal-use only: if more than one person can DM the bot, switch to per-channel-peer to avoid context leakage.

Configure session policy (Multiple users)

openclaw config set session.dmScope per-channel-peer

This prevents context leakage between users when sharing the bot.
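To see why the scope setting matters, here is a toy illustration of the two modes. This is not OpenClaw's internal code, only a sketch of how a session key could be derived under each dmScope value:

```python
def session_key(dm_scope: str, channel: str, peer: str) -> str:
    """Toy model of dmScope: 'main' shares one session; 'per-channel-peer' isolates by sender."""
    if dm_scope == "main":
        return "main"  # every DM lands in the same session
    if dm_scope == "per-channel-peer":
        return f"{channel}:{peer}"  # each (channel, peer) pair gets its own session
    raise ValueError(f"unknown dmScope: {dm_scope}")

# With "main", two different users would share context (leakage risk):
print(session_key("main", "telegram", "alice") ==
      session_key("main", "telegram", "bob"))  # True
# With "per-channel-peer", their sessions stay separate:
print(session_key("per-channel-peer", "telegram", "alice") ==
      session_key("per-channel-peer", "telegram", "bob"))  # False
```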

Enable auto-save and auto-start (Optional)

openclaw config set hooks.internal.enabled true
openclaw hooks enable session-memory
openclaw hooks enable boot-md
openclaw hooks check
  • session-memory: Captures a context snapshot around /new boundaries.
  • boot-md: Auto-runs BOOT.md behavior when the gateway starts.

Manage Pause and Resume

VESSL Cloud lets you Pause workspaces to reduce compute costs while preserving your files and environment. However, processes do not restart automatically when you Resume.

  • Pause: Compute stops, storage persists.
  • Resume: Compute starts, but processes do not automatically restart.

Create a helper script once to quickly restart your gateway:

cat >/root/resume_openclaw.sh <<'EOF2'
#!/usr/bin/env bash
set -euo pipefail
source /root/.demo_secrets
export PATH="$HOME/.local/bin:$PATH"

export NVM_DIR="${NVM_DIR:-$HOME/.nvm}"
if [ -s "$NVM_DIR/nvm.sh" ]; then
  . "$NVM_DIR/nvm.sh"
  nvm install 22 >/dev/null 2>&1 || true
  nvm use 22 >/dev/null
else
  echo "Warning: nvm not found at $NVM_DIR. Make sure Node 22 is active." >&2
fi

if ! command -v openclaw >/dev/null 2>&1; then
  apt-get update && apt-get install -y npm curl
  curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash -s -- --install-method npm
fi
exec openclaw gateway --bind lan --port 18789 --allow-unconfigured --token "${OPENCLAW_GATEWAY_TOKEN:-demo123}"
EOF2
chmod +x /root/resume_openclaw.sh

Run this command after Resuming:

bash /root/resume_openclaw.sh
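Note that the script ends with exec openclaw gateway, which runs in the foreground, so closing the terminal stops the gateway. If you want it to keep running after you close the JupyterLab tab, one option is to launch it detached; the log path here is an arbitrary choice:

```
nohup bash /root/resume_openclaw.sh > /root/openclaw_gateway.log 2>&1 &
tail -f /root/openclaw_gateway.log   # Ctrl+C stops the tail, not the gateway
```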

Connect Telegram (Optional)

OpenClaw works perfectly via API. If you want a chat interface, the native Telegram channel is recommended.

set +o history
read -s -p "TELEGRAM_BOT_TOKEN: " TELEGRAM_BOT_TOKEN; echo
set -o history

TG_TOKEN_FILE="$HOME/.openclaw/.telegram_bot_token"
install -d -m 700 "$(dirname "$TG_TOKEN_FILE")"
printf "%s" "$TELEGRAM_BOT_TOKEN" > "$TG_TOKEN_FILE"
chmod 600 "$TG_TOKEN_FILE"
unset TELEGRAM_BOT_TOKEN

openclaw config set channels.telegram.enabled true
openclaw config set channels.telegram.tokenFile "$TG_TOKEN_FILE"
openclaw config set channels.telegram.dmPolicy pairing

To connect:

  1. Start the gateway.
  2. Run openclaw pairing list telegram.
  3. Approve the connection with openclaw pairing approve telegram <CODE>.

Fallback: Custom Python relay (Advanced)

Use this only if the native Telegram channel is unavailable. Custom relays can lose context easily.

If you must run a custom relay, build it against the official OpenClaw documentation.

Default recommendation: native Telegram channel + pairing.

Troubleshooting

  • Error: openclaw: command not found
    Fix: Re-run Step 1.
  • Error: openclaw requires Node >=22.12
    Fix: Install Node 22 with nvm, then restart your shell.
  • Error: OPENAI_API_KEY missing
    Fix: Re-run Step 2, then run source /root/.demo_secrets.
  • Context resets after restart
    Fix: Verify you are using the same gateway host and workspace. Check your session reset policy.
  • Different behavior across channels
    Fix: Check your dmScope and identity linking policy.
  • Telegram replies ignore previous messages
    Fix: Your custom relay may be stateless. Switch to the native Telegram channel if possible.
  • Assistant forgets preferences
    Fix: Ensure preferences are in MEMORY.md. Restart the gateway and run /context to confirm they loaded.

Cost comparison: VESSL Cloud vs. Local setup

This guide excludes external API usage costs. Your infrastructure cost equals workspace runtime plus attached storage. (For example, running a standard CPU at $0.30/hour for 50 hours costs about $15 per month.)
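The arithmetic above generalizes to a one-line estimate. Note that the $0.30/hour figure is the example rate from this guide, not a quoted price:

```python
def monthly_infra_cost(hourly_rate: float, hours: float, storage_cost: float = 0.0) -> float:
    """Infrastructure cost = workspace runtime + attached storage (external API usage excluded)."""
    return round(hourly_rate * hours + storage_cost, 2)

print(monthly_infra_cost(0.30, 50))  # the guide's example: about $15/month
```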

Local setup (Mac Mini, Notebook, etc.)

  • Pros: No additional cloud infrastructure costs if you already own the hardware.
  • Cons: Requires running your computer 24/7. Exposing a local server to Telegram requires manual network tunneling (e.g., Cloudflare Tunnel), which is difficult to maintain.

VESSL Cloud

  • Pros: No need to leave your personal device on. Built-in port forwarding eliminates tunneling setup. You can Pause the workspace when not in use to save money.
  • Cons: You pay a small hourly rate while the workspace is running.

Wrapping up

OpenClaw makes it easier than ever to build a personal AI assistant. As more people explore what's possible, VESSL AI is here to provide the reliable infrastructure needed to bring these ideas to life.


Wayne Kim


Product Marketer

Try VESSL Cloud today.


© 2026 VESSL AI, Inc. All rights reserved.