20 February 2026
Ever thought about building your own AI assistant? With OpenClaw and VESSL Cloud, you can spin one up in minutes — no GPU required, just a few terminal commands.
This walkthrough uses openai/gpt-5.2 as the default model and writes its test output to /shared/demo/output/hooks.txt. Setting up an AI assistant workflow is straightforward. This guide covers:
What it can do
What it cannot do
Safety principles
Before starting, prepare the following. If you are new to VESSL Cloud, see the Getting Started guide.
Create a CPU-only workspace. Set the workspace volume to /root, the shared volume to /shared, and add custom port 18789.

Run these four steps in order to start your OpenClaw gateway and access the API.
Install the required packages and the OpenClaw CLI. Run this only once when you create the workspace.
export PATH="$HOME/.local/bin:$PATH"
# Comment out any git-lfs apt source entries so apt-get update does not fail on them
ls /etc/apt/sources.list.d/*git-lfs*.list 2>/dev/null | xargs -r sed -i 's/^deb /# deb /'
apt-get update
apt-get install -y npm python3-pip curl
command -v openclaw >/dev/null || (curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash -s -- --install-method npm)

If openclaw returns a Node version error, install Node 22:
curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"
nvm install 22
nvm use 22
nvm alias default 22
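The version error above comes from OpenClaw's Node requirement (the troubleshooting section cites Node >=22.12). A minimal sketch of that check, using a hypothetical helper that compares a `node --version` string against a minimum:

```python
import re

def node_meets_requirement(version: str, minimum=(22, 12)) -> bool:
    """Check a `node --version` string like 'v22.14.0' against a (major, minor) minimum."""
    m = re.match(r"v?(\d+)\.(\d+)", version.strip())
    if not m:
        return False
    return (int(m.group(1)), int(m.group(2))) >= minimum
```

For example, `node_meets_requirement("v18.19.1")` is False, which is exactly the case the nvm steps above fix.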
Keep your API key out of your terminal history. This step disables history, saves your key to a file, and allows subsequent steps to load it using source.
unset HISTFILE
export HISTFILE=/dev/null HISTSIZE=0 HISTFILESIZE=0
set +o history
read -s -p "OPENAI_API_KEY: " OPENAI_API_KEY; echo
set -o history
umask 077
cat > /root/.demo_secrets <<EOF2
export OPENAI_API_KEY="$OPENAI_API_KEY"
export OPENAI_MODEL="gpt-5.2"
export OPENCLAW_GATEWAY_TOKEN="demo123"
EOF2
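The secrets file written above is just plain `export NAME="value"` lines, so other tooling can read it too. A small parser sketch (hypothetical helper, assuming exactly that format):

```python
import re

def parse_secrets(text: str) -> dict:
    """Collect NAME -> value pairs from lines of the form: export NAME="value"."""
    secrets = {}
    for line in text.splitlines():
        m = re.match(r'\s*export\s+([A-Za-z_][A-Za-z0-9_]*)="(.*)"\s*$', line)
        if m:
            secrets[m.group(1)] = m.group(2)
    return secrets

sample = 'export OPENAI_MODEL="gpt-5.2"\nexport OPENCLAW_GATEWAY_TOKEN="demo123"\n'
```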
unset OPENAI_API_KEY

Bind the gateway in LAN mode and expose it through VESSL's custom port 18789. This makes it accessible from outside the workspace.
source /root/.demo_secrets
openclaw config set gateway.mode local
openclaw config set gateway.bind lan
openclaw config set gateway.port 18789
openclaw config set agents.defaults.model.primary "openai/gpt-5.2"
openclaw gateway --bind lan --port 18789 --allow-unconfigured --token "$OPENCLAW_GATEWAY_TOKEN"

Note: Open the workspace's External Link for port 18789 to verify access.

Now ensure everything works. This test sends a prompt and saves the result to /shared, allowing it to persist through Pause/Resume cycles.
source /root/.demo_secrets
python -m pip install -q openai
mkdir -p /shared/demo/output
python - <<'PY'
import os
from openai import OpenAI
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
r = client.responses.create(
    model="gpt-5.2",
    input="Write 5 short blog hook lines for a cloud workspace demo. Max 12 words per line.",
)
text = r.output_text.strip()
path = "/shared/demo/output/hooks.txt"
with open(path, "w", encoding="utf-8") as f:
    f.write(text + "\n")
print(path)
print(text)
PY

Check that the output was saved successfully:
ls -lh /shared/demo/output
cat /shared/demo/output/hooks.txt
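If you prefer to check the gateway port from a script rather than the browser's External Link, a TCP reachability sketch (standard-library sockets only; 18789 is the custom port used throughout this guide):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

From inside the workspace, `port_open("127.0.0.1", 18789)` should return True while the gateway is running.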
Your basic OpenClaw and VESSL Cloud setup is complete. The next sections cover how to configure it as a personal assistant.
OpenClaw maintains continuity through built-in session handling and workspace memory. Context typically breaks when session scope, reset policy, or workspace memory is left unconfigured.
For most users, a one-time setup of workspace memory and session policy works best.
These files teach the assistant about you. Replace all <...> placeholders with your details before running the code.
WSP="$HOME/.openclaw/workspace"
mkdir -p "$WSP/memory"
cat > "$WSP/USER.md" <<'EOF2'
name: <your name>
preferred_language: <your language, e.g. English>
primary_goal: <what this assistant is for, e.g. personal assistant for scheduling and research>
EOF2
cat > "$WSP/MEMORY.md" <<'EOF2'
- <default reply language rule, e.g. Reply in English by default.>
- <key task pattern, e.g. For meeting requests, check calendar conflicts first.>
- Before irreversible actions, return:
1) ready-to-execute checklist
2) pre-filled details
3) final confirmation question
- Never claim an action is complete without tool confirmation.
EOF2

Run a quick validation right after setup:
openclaw config get session.dmScope
openclaw config get session.reset.mode
openclaw hooks check

Then run /context once in chat to confirm workspace rules are loaded.
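One easy-to-miss step is a leftover <...> placeholder in USER.md or MEMORY.md. A sketch (hypothetical helper) that flags any token that was never replaced:

```python
import re

def unfilled_placeholders(text: str) -> list:
    """Return any <...> placeholder tokens still present in a memory file's text."""
    return re.findall(r"<[^<>\n]+>", text)
```

Run it over the contents of "$WSP/USER.md" and "$WSP/MEMORY.md"; an empty list means every placeholder was filled in.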
Tip: For a shopping assistant, set primary_goal: shopping assistant and add rules like "ask only for missing fields first." For a calendar manager, set primary_goal: calendar manager and add "check for scheduling conflicts before booking."
openclaw config set session.dmScope main
# Optional compatibility key (not required for most setups)
openclaw config set session.mainKey main
openclaw config set session.reset.mode idle
openclaw config set session.reset.idleMinutes 10080

main is the most stable default for a private assistant.
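session.reset.idleMinutes is expressed in minutes, so the 10080 above corresponds to a 7-day idle window. The conversion:

```python
def idle_minutes(days: int) -> int:
    """Convert an idle window in days to the minutes unit session.reset.idleMinutes expects."""
    return days * 24 * 60

print(idle_minutes(7))  # 10080
```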
session.mainKey is an optional compatibility setting and is not required for most setups.
Personal-use only: if more than one person can DM the bot, switch to per-channel-peer to avoid context leakage.
openclaw config set session.dmScope per-channel-peer

This prevents context leakage between users when sharing the bot.
openclaw config set hooks.internal.enabled true
openclaw hooks enable session-memory
openclaw hooks enable boot-md
openclaw hooks check

session-memory: Captures a context snapshot around /new boundaries.
boot-md: Auto-runs BOOT.md behavior when the gateway starts.

VESSL Cloud lets you Pause workspaces to reduce compute costs while preserving your files and environment. However, processes do not restart automatically when you Resume.
Create a helper script once to quickly restart your gateway:
cat >/root/resume_openclaw.sh <<'EOF2'
#!/usr/bin/env bash
set -euo pipefail
source /root/.demo_secrets
export PATH="$HOME/.local/bin:$PATH"
export NVM_DIR="${NVM_DIR:-$HOME/.nvm}"
if [ -s "$NVM_DIR/nvm.sh" ]; then
. "$NVM_DIR/nvm.sh"
nvm install 22 >/dev/null 2>&1 || true
nvm use 22 >/dev/null
else
echo "Warning: nvm not found at $NVM_DIR. Make sure Node 22 is active." >&2
fi
if ! command -v openclaw >/dev/null 2>&1; then
apt-get update && apt-get install -y npm curl
curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash -s -- --install-method npm
fi
exec openclaw gateway --bind lan --port 18789 --allow-unconfigured --token "${OPENCLAW_GATEWAY_TOKEN:-demo123}"
EOF2
chmod +x /root/resume_openclaw.sh

Run this command after Resuming:
bash /root/resume_openclaw.sh

OpenClaw works perfectly well via the API alone. If you want a chat interface, the native Telegram channel is recommended.
set +o history
read -s -p "TELEGRAM_BOT_TOKEN: " TELEGRAM_BOT_TOKEN; echo
set -o history
TG_TOKEN_FILE="$HOME/.openclaw/.telegram_bot_token"
install -d -m 700 "$(dirname "$TG_TOKEN_FILE")"
printf "%s" "$TELEGRAM_BOT_TOKEN" > "$TG_TOKEN_FILE"
chmod 600 "$TG_TOKEN_FILE"
unset TELEGRAM_BOT_TOKEN
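Before enabling the channel, you can double-check that the chmod 600 above actually left the token file owner-only. A small sketch using only the standard library:

```python
import os
import stat

def owner_only(path: str) -> bool:
    """True if the file mode grants no group or other permissions (e.g. 0600)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0
```

For example, `owner_only(os.path.expanduser("~/.openclaw/.telegram_bot_token"))` should return True after the steps above.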
openclaw config set channels.telegram.enabled true
openclaw config set channels.telegram.tokenFile "$TG_TOKEN_FILE"
openclaw config set channels.telegram.dmPolicy pairing

To connect:
openclaw pairing list telegram
openclaw pairing approve telegram <CODE>

Fallback: Custom Python relay (Advanced)
Use this only if the native Telegram channel is unavailable. Custom relays can lose context easily.
If you must run a custom relay, build it against the official docs below.
Default recommendation: native Telegram channel + pairing.
Troubleshooting common issues:
- openclaw: command not found: confirm $HOME/.local/bin is on your PATH, or re-run the install step.
- openclaw requires Node >=22.12: install Node 22 with nvm, then restart your shell.
- OPENAI_API_KEY missing: run source /root/.demo_secrets.
- Context shared between users: review session.dmScope and your identity linking policy.
- Workspace rules not applied: check MEMORY.md, restart the gateway, and run /context to confirm they loaded.

This guide excludes external API usage costs. Your infrastructure cost equals workspace runtime plus attached storage. (For example, running a standard CPU at $0.30/hour for 50 hours costs about $15 per month.)
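The cost estimate above is simply hourly rate times runtime hours, plus any storage cost. As a sketch (the $0.30/hour and 50-hour figures are the example values from this guide):

```python
def monthly_infra_cost(hourly_rate: float, hours: float, storage_cost: float = 0.0) -> float:
    """Workspace runtime cost plus attached storage cost, per the formula above."""
    return hourly_rate * hours + storage_cost

print(round(monthly_infra_cost(0.30, 50), 2))  # about $15/month
```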
Local setup (Mac Mini, Notebook, etc.)
VESSL Cloud
OpenClaw makes it easier than ever to build a personal AI assistant. As more people explore what's possible, VESSL AI is here to provide the reliable infrastructure needed to bring these ideas to life.

Product Marketer