# Install LLM Council
LLM Council — multi-model consensus app with one-command setup.
- **Rating:** 4 (116 reviews)
- **Downloads:** 935
- **Version:** 1.0.0
## LLM Council (with Installer)
LLM Council — ask one question to many models, let them critique each other, and get a synthesized chairman answer.
This skill is the fastest way to run it: one command installs dependencies, configures credentials, and launches both the backend and frontend. No manual setup, no API key prompts.
**OpenClaw-native:** credentials resolve automatically from the OpenClaw config or workspace `.env`, falling back to the local OpenClaw gateway (port 18789) if no OpenRouter key is found.
## Two Ways to Use LLM Council
| Mode | Best For | Command |
|---|---|---|
| Quick answer | Fast decisions, mobile, casual questions | `/council "Your question"` (requires the `ask-council` skill) |
| Full discussion | Deep research, exploring disagreements, seeing all model responses | `/install-llm-council`, then open the browser at `:5173` |
## Slash Command

```
/install-llm-council [--mode auto|dev|preview] [--dir PATH]
```

When the user says `/install-llm-council`, run:

```bash
bash ~/.openclaw/skills/install-llm-council/install.sh
```
The script will:
1. **Resolve credentials** — env var → workspace `.env` → OpenClaw local gateway (no prompt ever)
2. **Clone or pull** `https://github.com/jeadland/llm-council` to `~/workspace/llm-council`
3. **`uv sync`** — Python backend dependencies
4. **`npm ci`** — frontend dependencies
5. **Write `.env`** — API key/URL for OpenRouter direct or OpenClaw gateway mode
6. **Start app** — uses hardened `start.sh` with mode-aware startup and health checks
7. **Auto-handle port conflicts** — selects safe fallback ports when defaults are busy
8. **Print practical access URLs** — Caddy route and common direct fallbacks
## Flags
| Flag | Default | Description |
|---|---|---|
| `--mode auto` | `auto` | Detect Caddy on :5173 and prefer preview mode; otherwise dev mode |
| `--mode dev` | — | Run Vite dev server (hot reload, port 5173 default) |
| `--mode preview` | — | Build + run Vite preview (port 4173 default) |
| `--dir PATH` | `~/workspace/llm-council` | Override clone directory |
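The `--mode auto` behavior can be sketched as a small port probe. This is illustrative only: the `detect_mode` helper name and the bash `/dev/tcp` probe are assumptions, not the actual `install.sh` logic.

```shell
#!/usr/bin/env bash
# Sketch of --mode auto: if something already listens on :5173
# (assumed to be Caddy), prefer preview mode; otherwise use dev mode.
detect_mode() {
  # bash's /dev/tcp pseudo-device: the connect succeeds only if a
  # listener is bound on the port; the subshell closes the fd on exit
  if (exec 3<>"/dev/tcp/127.0.0.1/5173") 2>/dev/null; then
    echo "preview"
  else
    echo "dev"
  fi
}

mode=$(detect_mode)
echo "selected mode: $mode"
```

Probing the port directly (rather than looking for a Caddy process) keeps the check dependency-free, at the cost of treating any :5173 listener as Caddy.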
## Credential Resolution (OpenClaw-native)
The installer **never prompts** for API keys. It resolves credentials in this order:
1. **Environment** — `OPENROUTER_API_KEY` already exported
2. **Workspace `.env`** — `~/.openclaw/workspace/.env` contains `OPENROUTER_API_KEY=...`
3. **OpenClaw gateway** — reads `~/.openclaw/openclaw.json` → `gateway.auth.token` + `gateway.port`
   - Sets `OPENROUTER_API_URL=http://127.0.0.1:<port>/v1/chat/completions` in `.env`
   - Uses the gateway token as the bearer key (OpenAI-compatible endpoint)
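The three-step resolution order can be sketched as a shell function. The `resolve_key` helper and the fixture files are hypothetical, and JSON parsing is shown via `python3` for convenience; the real installer may implement this differently.

```shell
#!/usr/bin/env bash
# Sketch of env -> workspace .env -> gateway credential resolution.
set -u

resolve_key() {
  local envfile="$1" gateway="$2"
  # 1. An exported environment variable wins outright
  if [ -n "${OPENROUTER_API_KEY:-}" ]; then
    echo "env:$OPENROUTER_API_KEY"; return 0
  fi
  # 2. Workspace .env with a KEY=value line
  if [ -f "$envfile" ]; then
    local k
    k=$(sed -n 's/^OPENROUTER_API_KEY=//p' "$envfile" | head -n1)
    if [ -n "$k" ]; then echo "dotenv:$k"; return 0; fi
  fi
  # 3. OpenClaw gateway token from openclaw.json
  if [ -f "$gateway" ]; then
    local t
    t=$(python3 -c 'import json,sys; c=json.load(open(sys.argv[1])); print(c["gateway"]["auth"]["token"])' "$gateway" 2>/dev/null)
    if [ -n "$t" ]; then echo "gateway:$t"; return 0; fi
  fi
  echo "none:"
  return 1
}

# Demo with temporary fixtures instead of the real OpenClaw files
tmp=$(mktemp -d)
printf 'OPENROUTER_API_KEY=sk-from-dotenv\n' > "$tmp/.env"
printf '{"gateway":{"auth":{"token":"gw-token"},"port":18789}}\n' > "$tmp/openclaw.json"

unset OPENROUTER_API_KEY
first=$(resolve_key "$tmp/.env" "$tmp/openclaw.json")    # .env present: dotenv wins
second=$(resolve_key "$tmp/nope.env" "$tmp/openclaw.json") # no .env: falls to gateway
echo "$first"
echo "$second"
rm -rf "$tmp"
```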
## Ports
| Service | Port | Notes |
|---|---|---|
| Backend (FastAPI) | 8001 | Always |
| Frontend dev | 5173 | `--mode dev` (default) |
| Frontend preview | 4173 | `--mode preview` |
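The port-conflict handling mentioned in the install steps can be sketched as a linear probe upward from the default port. The `pick_port` helper is a hypothetical illustration, not the installer's actual fallback policy.

```shell
#!/usr/bin/env bash
# Sketch of fallback-port selection: step upward from the default
# until a port with no listener is found.
pick_port() {
  local port="$1"
  # A successful /dev/tcp connect means the port is busy; try the next one
  while (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    port=$((port + 1))
  done
  echo "$port"
}

backend_port=$(pick_port 8001)
echo "backend will use port $backend_port"
```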
## Files
| File | Purpose |
|---|---|
| `SKILL.md` | This file — skill documentation |
| `install.sh` | Main one-shot installer/launcher |
| `stop.sh` | Stop background services |
| `status.sh` | Check if services are running |
| `pids` | Saved PIDs for background processes |
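A liveness check built on the `pids` file could look like the sketch below. The one-PID-per-line layout and the `status_of` helper are assumptions for illustration; see `status.sh` for the real logic.

```shell
#!/usr/bin/env bash
# Sketch of a status check over a pids file: one PID per line;
# kill -0 tests whether a process exists without sending a signal.
status_of() {
  local pidfile="$1" pid alive=0 dead=0
  [ -f "$pidfile" ] || { echo "not installed"; return 1; }
  while read -r pid; do
    if kill -0 "$pid" 2>/dev/null; then
      alive=$((alive + 1))
    else
      dead=$((dead + 1))
    fi
  done < "$pidfile"
  echo "running=$alive stale=$dead"
}

# Demo with a throwaway pids file: our own PID plus one already-reaped PID
tmp=$(mktemp)
true & dead_pid=$!
wait "$dead_pid"                               # reap it so kill -0 fails
printf '%s\n%s\n' "$$" "$dead_pid" > "$tmp"
result=$(status_of "$tmp")
echo "$result"
rm -f "$tmp"
```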
## Agent Instructions
When the user says `/install-llm-council`, "install llm-council", or "start llm council":

```bash
bash ~/.openclaw/skills/install-llm-council/install.sh
```

Report back the access URL from the script output (e.g. `http://10.0.1.X:5173`).

To stop:

```bash
bash ~/.openclaw/skills/install-llm-council/stop.sh
```

To check status:

```bash
bash ~/.openclaw/skills/install-llm-council/status.sh
```
## Example Output

```
✅ LLM Council installed and running!
Mode: dev
API: openrouter
Backend: http://127.0.0.1:8001
Frontend: http://10.0.1.42:5173
Stop: bash ~/.openclaw/skills/install-llm-council/stop.sh
Status: bash ~/.openclaw/skills/install-llm-council/status.sh
```
## Installation

```
openclaw install install-llm-council
```