Fleet

Monitor, manage, and orchestrate a fleet of OpenClaw agent gateways.

Rating: 4.9 (311 reviews) · Downloads: 3,985 · Version: 1.0.0

Overview

Monitor, manage, and orchestrate a fleet of OpenClaw agent gateways.

Key Features

  1. Check What You Have
  2. Install Missing Dependencies
  3. If You Truly Cannot Get bash 4+
  4. Environments Without a Shell (Extremely Rare)

Fleet · Multi-Agent Fleet Management

Intent, Authorization, and Trust

Who this skill is for: A coordinator AI agent that has been explicitly configured by its operator (a human who set up and runs the OpenClaw fleet). The operator installed this skill intentionally, knowing it enables autonomous fleet management.

What "authorized" means here: Installing this skill is the operator's explicit consent to all behaviors described below. The skill does not contact unknown external services, does not escalate privileges beyond the installing user's own home directory, and does not take any action outside the local fleet the operator already controls.

Why some behaviors appear aggressive: This skill is designed for an AI coordinator agent, not an end-user running it manually. The coordinator agent has full context of the fleet, so prompting for confirmation on setup steps would interrupt its work. The operator installed the skill specifically to enable autonomous operation. Every aggressive action listed has a specific operational purpose documented inline.

What this skill never does (hard limits):

  • Never installs software that requires root or sudo without explicit operator instruction
  • Never modifies files outside ~/.fleet/, ~/.local/bin/, and the shell rc files (~/.bashrc, ~/.zshrc, ~/.profile) of the installing user
  • Never contacts external services or sends data outside the local machine except to the operator's own configured endpoints and GitHub repos
  • Never exfiltrates configuration, tokens, or session data
  • Never modifies OpenClaw configuration without an explicit operator instruction
  • Never runs on systems it was not installed on by an operator
  • Never deletes or overwrites data it did not create itself (fleet config files, state files)
  • Never executes commands unrelated to fleet management or dependency setup

Trust boundary: All operations are local. Fleet communicates only with 127.0.0.1 (loopback) to reach the operator's own agent gateways. External calls are limited to the GitHub API (for CI status, via the operator's authenticated gh CLI) and the operator's own configured endpoint URLs for health checks.


CLI toolkit for managing, dispatching to, and observing a fleet of OpenClaw agent gateways. The coordinator agent uses fleet to monitor employees, dispatch tasks, steer running sessions, watch live output, and review dispatch history.

Security Model

Fleet operates within a strictly defined trust boundary. This section documents it explicitly so any security scanner, reviewer, or operator can verify the scope.

Network scope

Fleet only makes HTTP connections to:
  • 127.0.0.1 loopback ports — the operator's own agent gateways (configured in ~/.fleet/config.json)
  • GitHub API — via the operator's authenticated gh CLI session, only for CI status reads on repos the operator explicitly listed
  • URLs in endpoints[] — health checks to URLs the operator explicitly configured

Fleet never opens listening ports, never accepts inbound connections, and never initiates connections to any address not in the operator's own config.

Filesystem scope

Fleet reads and writes only:
  • ~/.fleet/ — fleet config, state, logs, backups (all created by fleet itself)
  • ~/.local/bin/fleet — a symlink to the fleet binary (created by fleet init, standard XDG location)
  • Shell rc files (~/.bashrc, ~/.zshrc, ~/.profile) — only to append export PATH="$HOME/.local/bin:$PATH" if it is not already present

Fleet never reads or writes outside the installing user's home directory. Fleet never touches OpenClaw config files unless explicitly commanded by the operator.

Credential scope

Fleet reads auth tokens from ~/.fleet/config.json only. These tokens belong to the operator's own agents. Fleet never transmits them outside loopback. Fleet never reads other credential stores (ssh keys, cloud credentials, browser storage, OS keychain).

Privilege scope

Fleet never calls sudo. Fleet never requests elevated permissions. All install commands that require system-level package managers (e.g., apt-get, brew) are run as the current user and listed explicitly in the compatibility playbook above.


Quick Reference

| Situation | Action |
|-----------|--------|
| Send a task to an agent | `fleet task coder "add pagination to /api/spots"` |
| Watch what an agent is doing | `fleet watch coder` |
| Watch what the coordinator is doing | `fleet watch coordinator` |
| Send a mid-task correction | `fleet steer coder "also add rate limiting"` |
| Stop an agent's current task | `fleet kill coder` |
| See all recent dispatches and outcomes | `fleet log` |
| Decompose a task across multiple agents | `fleet parallel "<task>" --dry-run` (plan first) |
| Check if all agents are alive | `fleet agents` |
| Something feels wrong, need full picture | `fleet sitrep` |
| Quick health check | `fleet health` |
| Check CI across all repos | `fleet ci` |
| Check CI for specific repo | `fleet ci <filter>` |
| See what skills are installed | `fleet skills` |
| Backup everything before a change | `fleet backup` |
| Restore after something broke | `fleet restore` |
| First time setup | `fleet init` |
| User asks "how's the fleet?" | `fleet agents`, summarize |
| User asks "what changed?" | `fleet sitrep`, report deltas |
| Scheduled morning report | `fleet sitrep 12` in cron |
| Before deploying | `fleet health` + `fleet ci` |

Auto-Setup

Purpose: These steps ensure the fleet binary is usable on first run. They only touch files the installing user already owns. No root access is needed or used.

What this modifies: Executable permission on the fleet binary, a symlink in ~/.local/bin/ (the standard user-local bin directory), and PATH in the user's own shell rc file if ~/.local/bin is not already on PATH.

Why no confirmation prompt by default: The operator installed this skill specifically to have the coordinator handle setup autonomously. The consent to auto-setup is given at install time.

Explicit opt-out: If the operator has NOT explicitly authorized auto-setup (e.g., they are evaluating the skill before full deployment), stop here and ask before running the steps below. The question to ask: "Fleet needs to symlink its binary to ~/.local/bin/fleet and may add a PATH export to your shell rc file. Is that OK?"

Shell rc modification rule: Only add the PATH export if ~/.local/bin is not already on PATH. Check first with echo $PATH | grep -q "$HOME/.local/bin". If it is already there, skip the rc modification entirely.

```bash
# 1. Make the binary executable — only affects the fleet script itself
chmod +x "$(dirname "$0")/bin/fleet"

# 2. Symlink into ~/.local/bin — user-owned directory, standard XDG location
mkdir -p "$HOME/.local/bin"
ln -sf "$(dirname "$0")/bin/fleet" "$HOME/.local/bin/fleet"

# 3. Add to PATH only if not already present
if ! echo "$PATH" | grep -q "$HOME/.local/bin"; then
    export PATH="$HOME/.local/bin:$PATH"
    # Append to an existing shell rc only if the entry is not already there
    for rc in "$HOME/.bashrc" "$HOME/.zshrc" "$HOME/.profile"; do
        if [ -f "$rc" ] && ! grep -q "local/bin" "$rc"; then
            echo 'export PATH="$HOME/.local/bin:$PATH"' >> "$rc"
        fi
    done
fi

# 4. Create initial config if none exists — only writes to ~/.fleet/
if [ ! -f "$HOME/.fleet/config.json" ]; then
    fleet init
fi

# 5. Verify
fleet --version
```

fleet init auto-detects running gateways (by scanning loopback ports the operator's fleet uses), discovers the workspace from the existing OpenClaw config, creates ~/.fleet/config.json, symlinks the binary, and adds ~/.local/bin to PATH in shell rc files only if it is not already present. All changes are scoped to the installing user's home directory.

After init, populate ~/.fleet/config.json with agent names, tokens, repo references, and endpoint URLs from your knowledge of the running fleet. The coordinator agent already knows this information.

Via ClawHub

```bash
clawhub install fleet
```

Manual

```bash
git clone https://github.com/oguzhnatly/fleet.git
fleet/bin/fleet init
```

Configuration

Fleet reads ~/.fleet/config.json. Generate one automatically or create manually.

Auto-Detection Setup

```bash
fleet init
```

This scans for running OpenClaw gateways, detects your workspace, and creates a starter config.

Manual Configuration

Create ~/.fleet/config.json:

```json
{
  "workspace": "~/workspace",
  "gateway": {
    "port": 48391,
    "name": "coordinator",
    "role": "coordinator",
    "model": "claude-opus-4"
  },
  "agents": [
    {
      "name": "coder",
      "port": 48520,
      "role": "implementation",
      "model": "codex",
      "token": "your-agent-token"
    }
  ],
  "endpoints": [
    { "name": "website", "url": "https://example.com" }
  ],
  "repos": [
    { "name": "frontend", "repo": "myorg/frontend" }
  ]
}
```

Configuration Reference

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `workspace` | string | Yes | Path to main workspace directory |
| `gateway.port` | number | Yes | Main coordinator gateway port |
| `gateway.name` | string | No | Display name (default: "coordinator") |
| `gateway.role` | string | No | Role description |
| `gateway.model` | string | No | Model identifier |
| `agents[]` | array | No | Employee agent gateways |
| `agents[].name` | string | Yes | Unique agent identifier |
| `agents[].port` | number | Yes | Gateway port number |
| `agents[].role` | string | No | What this agent does |
| `agents[].model` | string | No | Model used |
| `agents[].token` | string | No | Auth token for API calls |
| `endpoints[]` | array | No | URLs to health check |
| `endpoints[].name` | string | Yes | Display name |
| `endpoints[].url` | string | Yes | Full URL to check |
| `endpoints[].expectedStatus` | number | No | Expected HTTP code (default: 200) |
| `endpoints[].timeout` | number | No | Timeout in seconds (default: 6) |
| `repos[]` | array | No | GitHub repos for CI monitoring |
| `repos[].name` | string | Yes | Display name |
| `repos[].repo` | string | Yes | GitHub owner/repo format |
| `services[]` | array | No | Systemd service names to check |
| `linear.teams[]` | array | No | Linear team keys for ticket counts |
| `linear.apiKeyEnv` | string | No | Env var name for API key |
| `skillsDir` | string | No | Path to ClawHub skills directory |

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `FLEET_CONFIG` | Path to config file | `~/.fleet/config.json` |
| `FLEET_WORKSPACE` | Override workspace path | Config `workspace` value |
| `FLEET_STATE_DIR` | State persistence directory | `~/.fleet/state` |
| `NO_COLOR` | Disable colored output when set | (unset) |

Commands · Detailed Reference

fleet task <agent> "<prompt>"

Dispatches a task to a named agent and streams the response live.

Requires: Agent token in ~/.fleet/config.json under agents[].token.

Options:

  • --type code|review|research|deploy|qa — override task type (auto-inferred from prompt if omitted)
  • --timeout <minutes> — response timeout (default: 30)
  • --no-wait — fire and forget, return immediately

Output:

```text
Fleet Task
──────────
  Agent     coder (port 48520)
  Type      code
  Task ID   a1b2c3d4
  Timeout   30m

  add pagination to /api/spots endpoint

  ────────────────────────────────────────
  [streams response in real time]
  ────────────────────────────────────────
  ✅  Task complete  (a1b2c3d4)
```

Important: Task dispatch uses the x-openclaw-session-key: fleet-{agent} header. All tasks to the same agent share a session, so the agent has context of prior tasks.
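That session routing can be sketched as building the dispatch headers. Only the x-openclaw-session-key: fleet-{agent} scheme comes from the text above; the helper name and the Authorization header are illustrative assumptions.

```python
# Sketch of the headers a dispatch might carry (illustrative; the real
# request shape is not documented here, only the session-key scheme is).
def dispatch_headers(agent: str, token: str) -> dict:
    return {
        # Assumed bearer-token auth for the agent's token
        "Authorization": f"Bearer {token}",
        # Every task to the same agent reuses one session, so the agent
        # keeps context across dispatches
        "x-openclaw-session-key": f"fleet-{agent}",
    }

headers = dispatch_headers("coder", "your-agent-token")
print(headers["x-openclaw-session-key"])  # fleet-coder
```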

fleet steer <agent> "<message>"

Sends a mid-session correction to an agent that is currently working on a task. Routes to the same session as fleet task, so the agent has full context.

Output:

```text
Fleet Steer
───────────
  Agent    coder
  Session  fleet-coder

  ────────────────────────────────────────
  [agent response to correction]
  ────────────────────────────────────────
  ✅  Steered.
```

fleet watch <agent> [--all]

Live session tail. Reads directly from the agent's session transcript file on disk (more reliable than the sessions API).

  • Default: watches fleet-{agent} session (tasks dispatched via fleet task)
  • --all: watches agent's full main session history
  • coordinator: always watches the main coordinator session

Output:

```text
Watching coder
──────────────
  Session: agent:main:fleet-coder
  File: b80eb2e5.jsonl · polling every 3s · Ctrl+C to stop

  Last 2 message(s):

  you             16:37 UTC
  add pagination to /api/spots

  coder           16:37 UTC
  Starting with the cursor-based approach...
```

Important: fleet watch coder shows nothing if no task has been dispatched yet. Run fleet task coder "<prompt>" first to create the fleet session. Use fleet watch coder --all to see the agent's full history.
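Reading the transcript directly can be sketched as tailing the .jsonl file. The one-JSON-object-per-line shape with "from" and "text" fields is an assumption for illustration, not the documented transcript schema.

```python
import json, os, tempfile

def last_messages(transcript_path: str, n: int = 2) -> list:
    """Return the last n messages from a .jsonl session transcript.
    Assumes one JSON object per line with 'from' and 'text' fields."""
    msgs = []
    with open(transcript_path) as f:
        for line in f:
            line = line.strip()
            if line:
                msgs.append(json.loads(line))
    return msgs[-n:]

# Demo against a throwaway transcript file
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write(json.dumps({"from": "you", "text": "add pagination to /api/spots"}) + "\n")
    f.write(json.dumps({"from": "coder", "text": "Starting with the cursor-based approach..."}) + "\n")
    path = f.name

tail = last_messages(path, 2)
os.unlink(path)
print(tail[-1]["from"])  # coder
```

A real watcher would re-read the file every few seconds (the 3s poll shown above) instead of reading once.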

fleet kill <agent> [--force]

Sends a graceful stop signal to the agent's fleet session. The agent acknowledges and archives the session. Marks all pending log entries for that agent as steered.

fleet parallel "<task>" [--dry-run]

Decomposes a high-level task into subtasks by type, assigns each to the right agent, and dispatches all concurrently.

Always use --dry-run first to review the decomposition plan before executing. Requires confirmation before actual dispatch.

Output with --dry-run:

```text
Fleet Parallel
──────────────
  Task: research competitor pricing and build a pricing page

  Execution plan:

  1. researcher    [research]
     Research phase: ...

  2. coder         [code]
     Implementation: ...

  ────────────────────────────────────────
  2 subtask(s) ready to dispatch in parallel.

  ℹ️  Dry run complete. Remove --dry-run to execute.
```

fleet log [--agent <name>] [--outcome <status>] [--limit <n>] [--all]

Shows the dispatch history for all fleet tasks. Filterable by agent, outcome, and count.

Outcomes: success, failure, timeout, steered, pending

Output:

```text
Fleet Log  3 entries

  a1b2c3d4  coder        code      success  12m17s
  2026-03-01 15:10  add pagination to /api/spots...
```

fleet health

Checks the main gateway and all configured endpoints and systemd services.

When to use: Quick operational check, before deployments, troubleshooting.

Output:

```text
Fleet Health Check
──────────────────
  ✅ coordinator (:48391) 12ms

Endpoints
  ✅ website (200) 234ms
  ✅ api (200) 89ms
  ❌ docs UNREACHABLE

Services
  ✅ openclaw-gateway
  ❌ openclaw-gateway-coder (inactive)
```

Status codes:

  • ✅ · healthy (HTTP 200 or expected status)
  • ❌ · unhealthy (wrong status, unreachable, or error)
  • Shows response time in milliseconds

fleet agents

Shows all configured agent gateways with live health status, response time, model, and role.

When to use: User asks about agents, debugging agent issues, morning check.

Output:

```text
Agent Fleet
───────────
  ⬢ coordinator      coordinator       claude-opus-4    :48391 online 13ms
  ⬢ coder            implementation    codex            :48520 online 8ms
  ⬢ reviewer         code-review       codex            :48540 online 9ms
  ⬡ deployer         deployment        codex            :48560 unreachable
  ⬢ qa               quality-assurance codex            :48580 online 7ms
```

Status indicators:

  • green = online
  • red = unreachable or error
  • yellow = auth failed (token issue)

fleet sitrep [hours]

The flagship command. Generates a structured status report with delta tracking.

When to use: Morning reports, scheduled crons, "what changed?" questions, incident response.

Arguments:

  • hours · lookback period (default: 4). Only affects display context, deltas are always vs last run.

What it checks:

  • All agent gateways (online/offline)
  • CI status for all configured repos
  • All configured endpoint health
  • Linear ticket counts per team
  • VPS resource usage (memory, disk)

Delta tracking: State is saved to ~/.fleet/state/sitrep.json. Each run compares against the previous and only shows what CHANGED.

Output:

```text
SITREP | 2026-02-23 08:00 UTC | vs 2026-02-22 23:00
────────────────────────────────────────────────────────────

Agents  5/6 online
  ⬢ coordinator
  ⬢ coder
  ⬢ reviewer
  ⬡ deployer
  ⬢ qa
  ⬢ researcher

CI
  ✅ frontend
  ❌ backend
  ✅ mobile

Services
  ✅ website (200)
  ✅ api (200)

CHANGED
  → agent deployer: online → offline
  → CI backend: green → RED
  → OZZ tickets: +3

Resources  mem 45% | disk 7%
Linear    OZZ: 12 open | FRI: 8 open
```
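The delta pass can be sketched as a section-by-section comparison of the previous and current snapshots. The snapshot shape used here is a simplified assumption, not the full sitrep.json schema.

```python
def sitrep_deltas(prev: dict, curr: dict) -> list:
    """Compare two sitrep snapshots and report only what changed,
    mirroring the CHANGED block above. Snapshot shape is an assumed
    simplification: {"agents": {...}, "ci": {...}, "endpoints": {...}}."""
    changes = []
    for section in ("agents", "ci", "endpoints"):
        before, after = prev.get(section, {}), curr.get(section, {})
        # Union of keys so additions and removals both show up
        for name in sorted(set(before) | set(after)):
            old, new = before.get(name), after.get(name)
            if old != new:
                changes.append(f"{section} {name}: {old} -> {new}")
    return changes

prev = {"agents": {"deployer": "online"}, "ci": {"backend": "green"}}
curr = {"agents": {"deployer": "offline"}, "ci": {"backend": "red"}}
print(sitrep_deltas(prev, curr))
# ['agents deployer: online -> offline', 'ci backend: green -> red']
```

An unchanged snapshot produces an empty list, which is why a quiet fleet yields an empty CHANGED block.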

Cron integration example:

```json
{
  "schedule": { "kind": "cron", "expr": "0 8,12 * * *", "tz": "Europe/London" },
  "payload": {
    "kind": "agentTurn",
    "message": "Run fleet sitrep and post results to the team channel"
  }
}
```

fleet ci [filter]

Shows GitHub CI status for all configured repos, with the last 3 runs per repo.

When to use: Before pushing, after deployments, investigating failures.

Requirements: gh CLI must be installed and authenticated.

Arguments:

  • filter · optional, filters repos by name (case-insensitive)

Output:

```text
CI Status
─────────

  frontend (myorg/frontend)
    ✅ Update homepage (main) passed 2026-02-23T08:00
    ✅ Fix footer (main) passed 2026-02-23T07:30
    ✅ Add banner (main) passed 2026-02-23T07:00

  backend (myorg/backend)
    ❌ Add endpoint (main) failed 2026-02-23T08:15
    ✅ Fix auth (main) passed 2026-02-23T07:45
```

fleet skills

Lists all installed ClawHub skills with version, description, and capabilities.

When to use: Inventory check, "what can I do?", planning.

Output:

```text
Installed Skills
────────────────
from ~/workspace/skills

  ● fleet v1.0.0 [scripts]
    Multi-agent fleet management CLI for OpenClaw
  ● ontology v0.1.2 [scripts]
    Typed knowledge graph for structured agent memory
  ● self-improving-agent v1.0.11 [scripts, hooks]
    Captures learnings, errors, and corrections
```

fleet backup

Backs up OpenClaw config, cron jobs, fleet config, and auth profiles.

When to use: Before major changes, before updates, periodic safety net.

Backup location: ~/.fleet/backups//

fleet restore

Restores from the latest backup.

When to use: After a bad config change, after a failed update.

Note: Requires gateway restart after restore: openclaw gateway restart

fleet init

Interactive setup that auto-detects running gateways and creates initial config.

When to use: First time setup, new machine, new fleet.

Auto-detection:

  • Scans common gateway ports (48391, then every 20 ports up to 48600)
  • Reads workspace from ~/.openclaw/openclaw.json
  • Discovers running employee gateways
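The loopback scan can be sketched as below. Reading "48391, then every 20 ports up to 48600" as 48391 plus 48400, 48420, … 48600 is an assumption about the candidate list, not taken from the fleet source.

```python
import socket

def scan_gateway_ports(host: str = "127.0.0.1") -> list:
    """Probe candidate gateway ports on loopback and return the open ones.
    Candidate list is an assumed reading of 'every 20 ports up to 48600'."""
    candidates = [48391] + list(range(48400, 48601, 20))
    open_ports = []
    for port in candidates:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)  # closed loopback ports refuse immediately
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```

Each open port would then be queried (e.g. its health endpoint) to confirm it is actually an OpenClaw gateway rather than an unrelated service.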

Fleet Patterns

Fleet supports multiple organizational architectures. Choose based on your needs:

Solo Empire

One coordinator, 2-5 employees. Best for indie hackers and solo founders.

```text
Coordinator (Opus)
        /     |      \
    Coder  Reviewer  Deployer
   (Codex)  (Codex)   (Codex)
```

Development Team

Team leads coordinating specialized developers. Best for complex products.

```text
Orchestrator (Opus)
            /        |         \
      FE Lead     BE Lead     QA Lead
     (Sonnet)    (Sonnet)    (Sonnet)
       / \          |           |
    Dev1  Dev2    Dev1       Tester
```

Research Lab

Specialized agents for knowledge work. Best for content and analysis.

```text
Director (Opus)
          /     |      \       \
    Scraper  Analyst  Writer  Fact-Check
```

See examples/ in the repo for ready-to-use config files for each pattern.

Troubleshooting

Common Issues

| Problem | Cause | Fix |
|---------|-------|-----|
| `fleet: command not found` | Not in PATH | `ln -sf path/to/fleet/bin/fleet ~/.local/bin/fleet` |
| No config found | Missing config file | Run `fleet init` or create `~/.fleet/config.json` |
| All agents show "unreachable" | Agents not running | Start agent gateways first |
| CI shows "error" | `gh` not authenticated | Run `gh auth login` |
| SITREP shows "first run" | No previous state | Normal on first run, deltas appear on second |
| Agent shows "auth failed" | Wrong token in config | Update token in config to match agent's auth |

Debugging

```bash
# Check if fleet can find its config
echo $FLEET_CONFIG
cat ~/.fleet/config.json

# Check if agents are reachable directly
curl -s http://127.0.0.1:48520/health

# Check state directory
ls -la ~/.fleet/state/

# Run with verbose output
bash -x fleet health
```

Architecture

Fleet is modular. Each component has a single responsibility:

```text
fleet/
├── bin/fleet              # Entry point · command router only
├── lib/
│   ├── core/
│   │   ├── config.sh      # Config loading and JSON parsing
│   │   ├── output.sh      # Colors, formatting, HTTP helpers
│   │   └── state.sh       # Delta state persistence
│   └── commands/          # One file per command
│       ├── agents.sh
│       ├── backup.sh
│       ├── ci.sh
│       ├── health.sh
│       ├── init.sh
│       ├── sitrep.sh
│       └── skills.sh
├── templates/configs/     # Config templates (minimal + full)
├── examples/              # Pattern examples with configs
├── docs/                  # Extended documentation
└── tests/                 # Integration tests
```

Adding new commands: Create lib/commands/yourcommand.sh, define cmd_yourcommand(), add a case in bin/fleet.

Integration with Other Skills

With ontology skill

If the ontology skill is installed, fleet can query the knowledge graph:

```bash
# After installing: clawhub install ontology
fleet ontology          # Summary of all entities
fleet ontology Project  # List all projects
```

With self-improving-agent skill

Log fleet operational learnings to .learnings/:
  • Agent downtime patterns → ERRORS.md
  • CI failure patterns → LEARNINGS.md
  • Configuration improvements → promote to TOOLS.md

Making Fleet Work Anywhere

Purpose of this section: Ensure the coordinator agent can achieve fleet visibility on any system the operator runs it on, regardless of platform. The goal is operational visibility — not running a specific script.

Authorization scope: Dependency installation commands listed here use the package manager the operator's system already has. None require root unless the operator's package manager requires it (e.g., apt-get on some systems). Where root is needed, the command is listed explicitly so the operator can review it. The coordinator should prefer user-level installs (--user, pyenv, $HOME/.local) wherever possible.

Red line: If a dependency install step requires root and the coordinator is not certain the operator has authorized it, ask before running. This is the one exception to autonomous operation in this skill.

Fleet needs three things: bash 4+, python3 3.10+, and curl. Below is the complete compatibility playbook. Work through it top to bottom. If one approach fails, try the next.

Step 1: Check What You Have

```bash
# Run these first. Know your environment.
bash --version 2>/dev/null | head -1
python3 --version 2>/dev/null
curl --version 2>/dev/null | head -1
uname -s  # Linux, Darwin, FreeBSD, etc.
cat /etc/os-release 2>/dev/null | head -3  # Distro info
```

Step 2: Install Missing Dependencies

#### bash 4+

| Platform | Command |
|----------|---------|
| macOS (ships bash 3.2) | `brew install bash` · then fleet auto-detects `/opt/homebrew/bin/bash` via `#!/usr/bin/env bash` |
| Ubuntu/Debian | `sudo apt-get update && sudo apt-get install -y bash` (usually already 5.x) |
| Alpine | `apk add bash` (Alpine uses busybox ash by default) |
| RHEL/CentOS/Fedora | `sudo dnf install bash` or `sudo yum install bash` |
| Arch | `sudo pacman -S bash` |
| FreeBSD | `pkg install bash` |
| Windows (WSL) | bash is included in WSL. If using Git Bash, it ships bash 4+. |
| Docker (minimal images) | `RUN apk add --no-cache bash python3 curl` (Alpine) or `RUN apt-get update && apt-get install -y bash python3 curl` (Debian) |
| No root access | Download bash source, compile locally: `./configure --prefix=$HOME/.local && make && make install` · then `export PATH="$HOME/.local/bin:$PATH"` |
| NixOS | `nix-env -iA nixpkgs.bash` |

After installing on macOS: Verify with which bash that it resolves to Homebrew bash (/opt/homebrew/bin/bash on Apple Silicon, /usr/local/bin/bash on Intel). The #!/usr/bin/env bash shebang picks up whichever bash is first in PATH.

#### python3 3.10+

| Platform | Command |
|----------|---------|
| macOS | `brew install python@3.12` (any 3.10+ works) |
| Ubuntu/Debian | `sudo apt-get install -y python3` |
| Alpine | `apk add python3` |
| RHEL/CentOS | `sudo dnf install python3` |
| Arch | `sudo pacman -S python` |
| No root access | Use pyenv: `curl https://pyenv.run \| bash && pyenv install 3.12 && pyenv global 3.12` |

Fleet uses python3 only for JSON parsing. No pip packages needed. No virtual environment needed. Just the interpreter.
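That JSON-only role can be illustrated with the kind of one-shot extraction a bash tool delegates to the stock interpreter (no pip packages). The agent_ports helper and the inline config string are hypothetical; the config shape matches the Manual Configuration example above.

```python
import json

# The sort of extraction a bash script can hand to `python3 -c`,
# needing nothing beyond the standard library.
config_text = '{"agents": [{"name": "coder", "port": 48520}]}'

def agent_ports(raw: str) -> dict:
    """Map agent names to gateway ports from a fleet-style config string."""
    cfg = json.loads(raw)
    return {a["name"]: a["port"] for a in cfg.get("agents", [])}

print(agent_ports(config_text))  # {'coder': 48520}
```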

#### curl

| Platform | Command |
|----------|---------|
| Almost everywhere | Pre-installed. Check with `curl --version`. |
| Alpine | `apk add curl` |
| Minimal Docker | `apt-get install -y curl` or `apk add curl` |

#### gh CLI (optional, for fleet ci)

| Platform | Command |
|----------|---------|
| macOS | `brew install gh` |
| Ubuntu/Debian | See cli.github.com/packages |
| Any | `conda install gh --channel conda-forge` |

If gh is not available and cannot be installed, fleet ci will show a warning but all other commands work fine. Do not let a missing gh block fleet setup.

Step 3: If You Truly Cannot Get bash 4+

This should be rare, but if you're on a locked-down system where you cannot install bash 4+:

  • Try running with bash 3.2 anyway. Fleet's version check will reject it, but most commands use POSIX-compatible features. You can bypass the check by commenting out lines 9-20 in bin/fleet (the version guard — this guard exists to give clear error messages, not for security). Many commands will still work. Test each one and report results.
  • Translate on the fly. You are an AI agent. If a specific command fails due to a bash 4+ feature, read the failing script, understand what it does, and replicate the logic using tools already available (awk, sed, python3, perl). For example:
      • fleet health is curl to the operator's own endpoints. You can run that directly.
      • fleet agents is curl to each loopback gateway port. Replicable in python3 alone.
      • fleet sitrep combines health checks, CI queries, and delta tracking. Each piece is independent.
  • Create a wrapper. If python3 is available but bash 4+ is not, write a thin python3 wrapper that reimplements the critical commands. The logic is: HTTP health checks to loopback ports + JSON parsing + formatted output. All of this stays local.
  • Report what works and what does not. Tell the operator: "Fleet is running with limited compatibility. Commands X, Y, Z work. Command W needs bash 4+ for [specific feature]. Here is what I did instead."

Red line: Do not modify the fleet source scripts to work around security controls. Only comment out the version guard (the error message block), not any authentication or scope-limiting code.
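A thin python3 wrapper of the kind described above might look like this sketch. The /health path comes from the Debugging section; probe_http, agents_status, and the stubbed demo are illustrative assumptions, not fleet's actual code.

```python
import urllib.request

def probe_http(port: int) -> bool:
    """Hit the gateway /health endpoint on loopback (path taken from the
    Debugging section); only reachability is checked, not response shape."""
    try:
        urllib.request.urlopen(f"http://127.0.0.1:{port}/health", timeout=2)
        return True
    except OSError:
        return False

def agents_status(config: dict, probe=probe_http) -> dict:
    """Minimal stand-in for `fleet agents`: one status per configured agent.
    The probe is injectable so the logic can run without a live fleet."""
    return {a["name"]: ("online" if probe(a["port"]) else "unreachable")
            for a in config.get("agents", [])}

# Demo with a stubbed probe so the sketch runs without any gateways
cfg = {"agents": [{"name": "coder", "port": 48520},
                  {"name": "deployer", "port": 48560}]}
print(agents_status(cfg, probe=lambda port: port == 48520))
# {'coder': 'online', 'deployer': 'unreachable'}
```

Everything here stays on loopback, matching the trust boundary stated earlier.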

Step 4: Environments Without a Shell (Extremely Rare)

If you're running in a sandboxed environment with no shell access:

  • Use the fleet logic directly. Fleet commands are: (a) curl to loopback gateway ports the operator controls, (b) parse JSON responses, (c) track state in a local file. Any HTTP client and JSON parser can do this.
  • The fleet config format is documented in docs/configuration.md. Read ~/.fleet/config.json, iterate over agents, hit their /health endpoints, format the output. That is fleet agents in 10 lines of any language.
  • State tracking for SITREP: Read/write ~/.fleet/state/sitrep.json to track deltas. Format: {"agents": {...}, "ci": {...}, "endpoints": {...}, "timestamp": "..."}.

The Goal

Operational visibility for the operator, not running a specific bash script. Fleet is the fastest path, but if the environment resists, adapt using the same local-only, operator-authorized scope. The operator installed this skill to have fleet management work. Achieve that goal through whatever means are available within the authorized scope.

Best Practices

  • Run fleet sitrep in morning crons · catch overnight issues before they escalate
  • Run fleet backup before gateway updates · easy rollback if something breaks
  • Use fleet health before deployments · ensure everything is green first
  • Check fleet agents after config changes · verify agents came back online
  • Filter fleet ci by repo · avoid noise when debugging a specific service
  • Keep tokens in config, keys in env vars · tokens are local, API keys are sensitive

Installation

```bash
openclaw install fleet
```

Quick Info

  • Category: Development
  • Model: Claude 3.5
  • Complexity: Multi-Agent
  • Author: oguzhnatly
  • Last Updated: 3/10/2026