📚 Step-by-Step Tutorial · Intermediate Level · ⏱️ 45 minutes

Chaining Multiple Skills

Combine multiple operations into powerful automation workflows using natural language conversation

✓ Updated: March 2025

Chaining Multiple Operations

Combine multiple OpenClaw operations to create powerful automation workflows.

🎯 What You'll Learn

How to chain together different OpenClaw capabilities:

  • Create multi-step workflows
  • Pass data between operations
  • Handle errors in workflows
  • Schedule automated tasks
  • Build complex automation pipelines

Real-world example: Build an automated news aggregation and analysis system.


📋 Prerequisites


๐Ÿ› ๏ธ Understanding OpenClaw Workflows

OpenClaw doesn't use complex workflow files; it uses natural language chaining. You can chain operations by:

  1. Sequential requests: "First do X, then do Y, then do Z"
  2. Context preservation: OpenClaw remembers previous operations
  3. Natural language flow: Describe your workflow conversationally
  4. Reusable skills: Define skills for common operations

๐Ÿ“ Step 1: Your First Chained Workflow (5 minutes)

Start the Gateway

openclaw gateway --port 18789 --verbose

Open WebChat

Navigate to:

http://localhost:18789

Basic Chaining Example

Try this:

First, go to https://techcrunch.com and extract the latest 10 headlines.
Then, save those headlines to a file called techcrunch-headlines.json.
Finally, read that file and create a summary of the key themes.

What happens:

  1. OpenClaw scrapes the headlines
  2. Saves them to a JSON file
  3. Reads the file back
  4. Analyzes and summarizes the content

All in one continuous conversation!


🔄 Step 2: Multi-Source Data Aggregation (10 minutes)

Scraping Multiple Sites

I want to create a comprehensive tech news digest:

1. Scrape the top 15 headlines from TechCrunch
2. Scrape the top 15 headlines from The Verge
3. Scrape the top 15 headlines from Ars Technica
4. Combine all headlines into one list
5. Remove duplicates based on similarity
6. Sort by publication date
7. Save the combined feed to tech-news-digest.json
8. Create a summary of the main themes
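The deduplicate-and-sort steps in this request can be sketched in plain Python. This is only an illustration of the logic OpenClaw would carry out, not its internals; the `title` and `published` field names are assumptions for the example:

```python
from datetime import datetime
from difflib import SequenceMatcher

def dedupe_headlines(items, threshold=0.85):
    """Keep the first of any pair of headlines whose titles are
    more than `threshold` similar (fuzzy match via difflib)."""
    kept = []
    for item in items:
        if not any(SequenceMatcher(None, item["title"].lower(),
                                   k["title"].lower()).ratio() > threshold
                   for k in kept):
            kept.append(item)
    return kept

def sort_by_date(items):
    """Newest first, assuming ISO 8601 `published` strings."""
    return sorted(items,
                  key=lambda i: datetime.fromisoformat(i["published"]),
                  reverse=True)

headlines = [
    {"title": "OpenAI releases new model", "source": "TechCrunch",   "published": "2025-03-02"},
    {"title": "OpenAI Releases New Model", "source": "The Verge",    "published": "2025-03-02"},
    {"title": "Chip shortage easing",      "source": "Ars Technica", "published": "2025-03-03"},
]

digest = sort_by_date(dedupe_headlines(headlines))
```

Here the second headline is dropped as a near-duplicate of the first, and the remaining entries come back newest first.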

Adding Data Enrichment

For each headline in the digest:
1. Use the headline to search for the original article
2. Extract the full article text
3. Generate a 2-sentence summary
4. Add the summary to the digest entry
5. Update the JSON file with enriched data

📊 Step 3: Processing Pipelines (12 minutes)

E-commerce Price Monitoring Pipeline

Create a price monitoring workflow:

1. Go to https://amazon.com/dp/PRODUCT_ID
2. Extract the current price
3. Compare with the price in ~/price-history.json
4. If price changed:
   - Add new entry with timestamp
   - Calculate price difference
   - Update price-history.json
5. If price dropped by more than 10%:
   - Send me an alert with the details
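The compare-and-alert logic above boils down to a small function. A minimal sketch, assuming each history entry is a dict with `price` and `timestamp` keys (an illustrative schema, not a fixed OpenClaw format):

```python
import json
from datetime import datetime, timezone

def check_price(current_price, history, alert_threshold=0.10):
    """Compare the current price with the last recorded one.
    Returns (updated_history, alert_message_or_None)."""
    last = history[-1]["price"] if history else None
    alert = None
    if last is None or current_price != last:
        # price changed: append a new timestamped entry
        history = history + [{
            "price": current_price,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }]
        if last is not None:
            change = (current_price - last) / last
            if change <= -alert_threshold:
                alert = f"Price dropped {abs(change):.0%}: {last} -> {current_price}"
    return history, alert

history = [{"price": 100.0, "timestamp": "2025-03-01T08:00:00+00:00"}]
history, alert = check_price(85.0, history)
```

A 15% drop clears the 10% threshold, so `alert` carries a message and the history gains a new entry; `json.dump` would then persist it back to `~/price-history.json`.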

Data Processing Pipeline

Process sales data:

1. Read all CSV files from ~/sales-data/2025/
2. For each CSV:
   - Validate the data format
   - Remove duplicate entries
   - Standardize date formats to ISO 8601
   - Calculate totals per customer
3. Combine all processed data
4. Generate customer summary report
5. Save to ~/reports/customer-summary-2025.json
6. Create a visual chart of sales trends
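The validate/dedupe/standardize/total steps for one CSV can be sketched as follows. The column names `customer`, `date`, and `amount` and the US-style input dates are assumptions for the example:

```python
import csv
import io
from datetime import datetime

def process_sales_csv(text):
    """Drop exact duplicate rows, normalise dates to ISO 8601,
    and total sales per customer. Returns (clean_rows, totals)."""
    seen, totals, rows = set(), {}, []
    for row in csv.DictReader(io.StringIO(text)):
        key = tuple(sorted(row.items()))
        if key in seen:            # exact duplicate entry
            continue
        seen.add(key)
        # standardise a US-style date to ISO 8601 (also validates it)
        row["date"] = datetime.strptime(row["date"], "%m/%d/%Y").date().isoformat()
        rows.append(row)
        totals[row["customer"]] = totals.get(row["customer"], 0.0) + float(row["amount"])
    return rows, totals

raw = ("customer,date,amount\n"
       "acme,03/01/2025,100.50\n"
       "acme,03/01/2025,100.50\n"   # duplicate row
       "globex,03/02/2025,75.00\n")
rows, totals = process_sales_csv(raw)
```

The duplicate `acme` row is skipped, dates come out as `2025-03-01`, and the per-customer totals feed straight into the summary report.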

๐ŸŒ Step 4: Web + File Automation (10 minutes)

Scraping and Storing

Automated article collection:

1. Go to https://news.ycombinator.com
2. Extract the top 20 stories with titles, links, and points
3. For each story:
   - Visit the article URL
   - Extract the main article text
   - Generate a 100-word summary
   - Save summary to ~/articles/[story-id].txt
4. Create an index file with all summaries
5. Save index to ~/articles/index.json

Periodic Monitoring

Set up automated monitoring:

Every hour:
1. Check https://status.example.com for service status
2. If status is not "operational":
   - Take a screenshot as evidence
   - Save screenshot to ~/screenshots/status-[timestamp].png
   - Log the incident to ~/downtime-log.txt
   - Alert me with the details
3. If status returns to normal:
   - Log recovery time
   - Calculate total downtime

🔧 Step 5: Error Handling in Workflows (8 minutes)

Retry Logic

Try to scrape https://fragile-site.com/data
If it fails:
   - Wait 30 seconds
   - Retry up to 3 times
   - If still failing after 3 attempts:
     - Log the error to ~/scraping-errors.txt
     - Alert me about the failure
     - Continue with other tasks
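The retry pattern described above is worth seeing as code. A minimal sketch of wait-and-retry with a final log-and-continue (the `fn` being retried stands in for any fragile operation, such as a scrape):

```python
import time

def with_retries(fn, attempts=3, wait_seconds=30, log=print):
    """Call `fn`; on failure wait and retry up to `attempts` times.
    Logs the final error instead of raising, so other tasks continue."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:
            log(f"attempt {attempt} failed: {exc}")
            if attempt < attempts:
                time.sleep(wait_seconds)
    return None  # all attempts failed; caller carries on with other tasks

# A stand-in for a fragile scrape that succeeds on the third try:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("site unavailable")
    return "data"

result = with_retries(flaky, attempts=3, wait_seconds=0)
```

Returning `None` rather than raising is the "continue with other tasks" behaviour; a real workflow would also append the error to `~/scraping-errors.txt`.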

Graceful Degradation

Process customer data:

1. Try to read from database
2. If database is unavailable:
   - Fall back to reading from backup CSV file
   - Log the fallback incident
3. If backup is also unavailable:
   - Alert me immediately
   - Don't proceed with processing

Data Validation

Validate data at each step:

1. Scrape data from website
2. Validate that we got at least 10 items
   - If fewer than 10, this might be an error
3. Try to save to JSON file
4. Validate JSON structure
   - If invalid, try to repair or log error
5. Only proceed with next steps if validation passes
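Those validation checkpoints translate into small guard functions that fail loudly instead of letting bad data flow downstream. A sketch, with the minimum-count and JSON round-trip checks from the list above:

```python
import json

def validate_scrape(items, minimum=10):
    """Checkpoint: fail loudly if the scrape returned suspiciously few items."""
    if len(items) < minimum:
        raise ValueError(f"only {len(items)} items scraped; expected at least {minimum}")
    return items

def validate_json_roundtrip(items):
    """Checkpoint: serialise and parse back to confirm the structure
    survives a trip through a JSON file."""
    return json.loads(json.dumps(items))

items = [{"title": f"story {i}"} for i in range(12)]
checked = validate_json_roundtrip(validate_scrape(items))
```

Chaining the guards means later steps only ever see data that passed every checkpoint.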

📅 Step 6: Scheduling Automated Workflows (10 minutes)

Daily Tasks

Set up a daily routine:

Every day at 8 AM:
1. Check my email for important messages
2. Scrape top tech news from 3 sources
3. Generate a morning briefing document
4. Save to ~/briefings/morning-[date].txt
5. Send me a notification when ready

Weekly Reports

Create weekly summary:

Every Sunday evening:
1. Collect all files from ~/tasks/completed-[week]/
2. Generate summary of completed tasks
3. Calculate productivity metrics
4. Create weekly report in markdown
5. Save to ~/reports/weekly-[week].md
6. Email the report to me

Conditional Automation

Smart file organization:

When new files appear in ~/Downloads/:
1. Check the file type
2. If it's a document:
   - Extract metadata (author, date, title)
   - Rename file using metadata
   - Move to appropriate folder
3. If it's an image:
   - Check date taken from EXIF data
   - Organize by date: ~/Pictures/YYYY/MM/
   - Create thumbnail
4. If it's a code file:
   - Detect programming language
   - Move to ~/projects/[language]/
5. Log all actions to ~/file-organizer-log.txt
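The document/image/code branching above is, at heart, a routing table keyed on file extension. A sketch with hypothetical extension sets and destination folders (the real rule set would be whatever you describe to OpenClaw):

```python
from pathlib import Path

# Illustrative extension maps; extend to taste.
DOCS = {".pdf", ".docx", ".txt"}
IMAGES = {".jpg", ".png", ".heic"}
CODE = {".py": "python", ".rs": "rust", ".go": "go"}

def route(filename):
    """Return the destination folder (as a string) for a new download."""
    ext = Path(filename).suffix.lower()
    if ext in DOCS:
        return "~/Documents/"
    if ext in IMAGES:
        return "~/Pictures/"        # a real rule would subfolder by EXIF date
    if ext in CODE:
        return f"~/projects/{CODE[ext]}/"
    return "~/Downloads/unsorted/"

dest = route("report.PDF")
```

Lower-casing the suffix makes the routing case-insensitive, and the fallback folder guarantees every file gets a logged destination.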

🚀 Advanced Workflow Examples

Example 1: Research Assistant

Automated research workflow:

When I ask about a topic:
1. Search for recent articles on Google Scholar
2. Visit top 5 results
3. Extract key findings and citations
4. Cross-reference findings between sources
5. Identify common themes and disagreements
6. Generate annotated bibliography
7. Save to ~/research/[topic]/bibliography.md
8. Create summary document
9. Save summary to ~/research/[topic]/summary.md

Example 2: Social Media Monitor

Social media monitoring:

Every 2 hours:
1. Search Twitter for mentions of "OpenClaw"
2. Extract: tweet text, author, engagement metrics
3. Calculate sentiment score
4. If tweet has >100 likes OR negative sentiment:
   - Save to ~/social-media/important-tweets.json
   - Alert me with the tweet details
5. Generate daily engagement report
6. Save to ~/social-media/daily-report-[date].json

Example 3: Financial Data Processor

Financial data pipeline:

Daily at market close:
1. Fetch stock prices from my watchlist
2. Calculate daily returns
3. Compare with 50-day moving average
4. If price crosses moving average:
   - Generate trade signal
   - Log to ~/trading/signals-[date].json
   - Alert me with the signal
5. Update portfolio values
6. Generate performance report
7. Save to ~/finance/portfolio-performance-[date].pdf

💡 Tips for Complex Workflows

1. Break Down Complex Tasks

Instead of one giant request:

Don't say: "Do everything at once"

Do say:
"Step 1: Scrape the data"
"Step 2: Validate what we got"
"Step 3: Process the valid data"
"Step 4: Save and report"

2. Use Intermediate Files

Save progress as you go:

1. Scrape data → save to raw-data.json
2. Process data → save to processed-data.json
3. Generate report → save to report.md

This way, if something fails, you don't lose everything.

3. Add Checkpoints

Add validation checkpoints:

1. After scraping, verify we got data
2. After processing, verify output format
3. Before saving, verify file location
4. After saving, verify file contents

4. Log Everything

Maintain an audit trail:

Log all operations to ~/workflow-log.txt:
- What was done
- When it was done
- What was the result
- Any errors encountered
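An audit-trail entry only needs those four fields: what, when, result, error. A minimal sketch of a log helper (appending to an in-memory list here; a real workflow would append each line to `~/workflow-log.txt`):

```python
from datetime import datetime, timezone

def log_step(log_lines, action, result, error=None):
    """Append one audit-trail entry: what, when, result, any error."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    line = f"{stamp} | {action} | {result}"
    if error:
        line += f" | ERROR: {error}"
    log_lines.append(line)
    return line

log = []
log_step(log, "scrape techcrunch", "10 headlines")
log_step(log, "save digest", "failed", error="disk full")
```

One pipe-delimited line per operation keeps the trail both human-readable and easy to grep later.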

5. Test Incrementally

Test each step before chaining:

First, verify I can scrape Site A
Then, verify I can scrape Site B
Only then, combine them into aggregation workflow

๐Ÿ” Troubleshooting Workflow Issues

Issue: "Workflow stops midway"

Solution: Add explicit checkpoints:

After each step, verify the result before proceeding.
Alert me if any step fails.
Don't automatically continue if something's wrong.

Issue: "Data gets lost between steps"

Solution: Save intermediate results:

After scraping, save to temp-results.json
In the next step, read from temp-results.json
This ensures data persists between operations.

Issue: "Too slow"

Solution: Optimize the workflow:

Process in parallel where possible:
"Scrape Site A and Site B at the same time
Then combine the results"

Issue: "Memory issues"

Solution: Process in batches:

Instead of processing all 1000 items at once:
Process in batches of 100
Save each batch
Combine at the end
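Batching is the standard fix for memory pressure: work on fixed-size slices and persist each before moving on. A sketch using a generator so only one batch is in flight at a time:

```python
def batches(items, size=100):
    """Yield successive fixed-size slices so memory stays bounded."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

processed = []
for batch in batches(list(range(1000)), size=100):
    # process and "save" each batch before moving to the next;
    # summing stands in for the real per-batch work
    processed.append(sum(batch))

total = sum(processed)  # combine at the end
```

1000 items become ten batches of 100; each batch result is saved as it completes, so a failure mid-run loses at most one batch.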

✅ Best Practices

1. Clear Instructions

Be specific about what you want:

  • ✅ "Extract article titles and links, save to JSON"
  • ❌ "Get the articles"

2. Validation

Add validation steps:

  • ✅ "Verify the file was created successfully"
  • ✅ "Check that the JSON is valid before proceeding"

3. Error Handling

Plan for failures:

  • ✅ "If the site is down, try the backup site"
  • ✅ "If scraping fails, log the error and continue"

4. Documentation

Comment your workflow:

  • ✅ "Step 1: Data collection"
  • ✅ "Step 2: Data cleaning"
  • ✅ "Step 3: Report generation"

5. Testing

Test workflows with small data first:

  • ✅ "Test with just 3 items before doing 1000"
  • ✅ "Run once manually before scheduling it"

🎯 Real-World Workflow Examples

Example 1: Competitive Intelligence

Competitor monitoring workflow:

Daily:
1. Scrape competitor pricing from 3 websites
2. Compare with our pricing
3. Identify price gaps
4. Generate competitive analysis report
5. If competitor lowered prices significantly:
   - Alert me immediately
   - Suggest price adjustments
6. Save all data to ~/competitive-intel/[date]/
7. Update dashboard with trends

Example 2: Content Repurposing

Content automation:

When I publish a blog post:
1. Extract key points from the article
2. Generate 5 social media tweets
3. Create a LinkedIn post
4. Generate email newsletter summary
5. Create an infographic summary
6. Save all content to ~/content-repurposed/[article-id]/
7. Schedule posts across platforms
8. Track engagement metrics

Example 3: Data Quality Pipeline

Data quality monitoring:

Every night:
1. Scan all databases for data quality issues:
   - Missing values
   - Invalid formats
   - Duplicates
   - Outdated records
2. Generate data quality report
3. If issues found:
   - Categorize by severity
   - Create remediation tasks
   - Alert data team
4. Save report to ~/data-quality/daily-[date].json
5. Update metrics dashboard

🎯 What's Next?


🆘 Need Help?


โฑ๏ธ Total Time: 45 minutes ๐Ÿ“Š Difficulty: Intermediate ๐ŸŽฏ Result: Building complex automation workflows with OpenClaw


💡 Key Takeaways

  1. Natural Language Workflows: No complex syntax, just describe what you want
  2. Conversation Memory: OpenClaw remembers context across operations
  3. Error Resilient: Build in retries and fallbacks
  4. Scheduling Ready: Automate recurring tasks easily
  5. Composable: Combine web scraping, file processing, and analysis
  6. Production Ready: Suitable for real-world automation tasks

Next: Try chaining together a few operations by describing your workflow step by step!

🎉

Congratulations!

You've completed this tutorial. Ready for the next challenge?