# Mulerouter Skills

Generates images and videos using MuleRouter or MuleRun multimodal APIs.

- Rating: 3.9 (420 reviews)
- Downloads: 861
- Version: 1.0.0
## ✨ Key Features

- Check for existing configuration
- Configure if needed
- Using `uv` to run scripts
# MuleRouter API

Generate images and videos using MuleRouter or MuleRun multimodal APIs.
## Configuration Check

Before running any commands, verify the environment is configured.

### Step 1: Check for existing configuration

```shell
# Check environment variables
echo "MULEROUTER_BASE_URL: $MULEROUTER_BASE_URL"
echo "MULEROUTER_SITE: $MULEROUTER_SITE"
echo "MULEROUTER_API_KEY: ${MULEROUTER_API_KEY:+[SET]}"

# Check for .env file
ls -la .env 2>/dev/null || echo "No .env file found"
```
### Step 2: Configure if needed

**Option A: Environment variables with custom base URL (highest priority)**

```shell
export MULEROUTER_BASE_URL="https://api.mulerouter.ai"  # or your custom API endpoint
export MULEROUTER_API_KEY="your-api-key"
```

**Option B: Environment variables with site (used if base URL not set)**

```shell
export MULEROUTER_SITE="mulerun"  # or "mulerouter"
export MULEROUTER_API_KEY="your-api-key"
```
**Option C: Create `.env` file**

Create `.env` in the current working directory:

```shell
# Option 1: Use custom base URL (takes priority over SITE)
MULEROUTER_BASE_URL=https://api.mulerouter.ai
MULEROUTER_API_KEY=your-api-key

# Option 2: Use site (if BASE_URL not set)
# MULEROUTER_SITE=mulerun
# MULEROUTER_API_KEY=your-api-key
```
**Note:** `MULEROUTER_BASE_URL` takes priority over `MULEROUTER_SITE`. If both are set, `MULEROUTER_BASE_URL` is used.

**Note:** The tool only reads `.env` from the current directory. Run scripts from the skill root (`skills/mulerouter-skills/`).
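The precedence rule above can be sketched as a small resolver. This is an illustrative sketch, not the skill's actual code, and the site-to-URL mapping is a guess (only the `mulerouter` endpoint appears in this document):

```python
# Hypothetical sketch of the configuration precedence described above:
# MULEROUTER_BASE_URL wins; MULEROUTER_SITE is the fallback.
SITE_URLS = {
    "mulerouter": "https://api.mulerouter.ai",  # from the examples above
    "mulerun": "https://api.mulerun.com",       # assumed, not documented here
}

def resolve_base_url(env: dict) -> str:
    """Return the API base URL given an environment-like mapping."""
    base_url = env.get("MULEROUTER_BASE_URL")
    if base_url:
        return base_url  # explicit base URL takes priority over SITE
    site = env.get("MULEROUTER_SITE", "mulerouter")
    return SITE_URLS[site]
```

For example, `resolve_base_url({"MULEROUTER_BASE_URL": "https://example.com", "MULEROUTER_SITE": "mulerun"})` returns the base URL, not the site's endpoint.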
### Step 3: Using `uv` to run scripts

The skill uses `uv` for dependency management and execution. Make sure `uv` is installed and available in your `PATH`.

Run `uv sync` to install dependencies.
## Quick Start

### 1. List available models

```shell
uv run python scripts/list_models.py
```
### 2. Check model parameters

```shell
uv run python models/alibaba/wan2.6-t2v/generation.py --list-params
```
### 3. Generate content

**Text-to-Video:**

```shell
uv run python models/alibaba/wan2.6-t2v/generation.py --prompt "A cat walking through a garden"
```

**Text-to-Image:**

```shell
uv run python models/alibaba/wan2.6-t2i/generation.py --prompt "A serene mountain lake"
```

**Image-to-Video:**

```shell
# Remote image URL
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "https://example.com/photo.jpg"

# Local image path
uv run python models/alibaba/wan2.6-i2v/generation.py --prompt "Gentle zoom in" --image "/path/to/local/image.png"
```
## Image Input

For image parameters (`--image`, `--images`, etc.), prefer local file paths over base64.

```shell
# Preferred: local file path (auto-converted to base64)
--image /tmp/photo.png
--images '["/tmp/photo.png"]'
```

The skill automatically converts local file paths to base64 before sending them to the API. This avoids the command-line length limits that raw base64 strings can hit.
## Workflow

- Check configuration: verify `MULEROUTER_BASE_URL` or `MULEROUTER_SITE`, and `MULEROUTER_API_KEY`, are set
- Install dependencies: run `uv sync`
- Run `uv run python scripts/list_models.py` to discover available models
- Run `uv run python models/<model-path>/generation.py --list-params` to see parameters
- Execute with appropriate parameters
- Parse output URLs from results
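The last workflow step, parsing output URLs from results, could be sketched as follows. The generation scripts' exact output format is not specified here, so this just scans text for URLs; the `sample` payload below is hypothetical:

```python
import json
import re

# Match http(s) URLs, stopping at whitespace, quotes, or closing brackets.
URL_RE = re.compile(r"https?://[^\s\"'\]]+")

def extract_urls(output: str) -> list:
    """Pull any URLs out of a script's stdout (JSON or plain text)."""
    return URL_RE.findall(output)

# Hypothetical result shape, for illustration only.
sample = json.dumps({"status": "done", "video_url": "https://cdn.example.com/out.mp4"})
print(extract_urls(sample))  # -> ['https://cdn.example.com/out.mp4']
```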
## Tips
- For an image generation model, a suggested timeout is 5 minutes.
- For a video generation model, a suggested timeout is 15 minutes.
## References

- `REFERENCE.md` - API configuration and CLI options
- `MODELS.md` - Complete model specifications
## Installation

```shell
openclaw install mulerouter-skills
```