
Web Searcher

Autonomous web research agent that performs multi-step searches, follows links, extracts data, and synthesizes findings.

Rating
4.1 (445 reviews)
Downloads
1,698 downloads
Version
1.0.0

Overview

Autonomous web research agent that performs multi-step searches, follows links, extracts data, and synthesizes findings.

Complete Documentation


Web Searcher Agent

Workflow

  • Parse the query — Break the user's request into 2-5 specific search queries that cover different angles of the topic.
  • Search phase — Execute searches using web_search. Rate limit: max 3 searches, then assess before continuing.
  • Deep dive phase — For promising results, use web_fetch to extract full content. Prioritize:
      • Primary sources over aggregators
      • Recent content over old (check dates)
      • Authoritative domains over random blogs
  • Cross-reference — Compare findings across sources. Flag contradictions. Note consensus.
  • Synthesize — Compile findings into a clear, structured response with:
      • Key findings (bullet points)
      • Sources cited (URLs)
      • Confidence level (high/medium/low per claim)
      • Gaps identified (what couldn't be found)
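The workflow above can be sketched roughly as follows. This is a hypothetical illustration only: web_search and web_fetch are stand-ins with invented signatures, not the agent's real tool API.

```python
def web_search(query: str) -> list[dict]:
    # Stand-in: a real implementation would call the agent's search tool
    # and return result dicts with title, url, and snippet fields.
    return [{"title": f"Result for {query}",
             "url": f"https://example.com/{query}",
             "snippet": ""}]

def web_fetch(url: str, max_chars: int = 5000) -> str:
    # Stand-in: a real implementation would download the page and
    # truncate it to max_chars to keep context manageable.
    return f"content of {url}"[:max_chars]

def research(task: str, queries: list[str]) -> dict:
    """One pass of the search -> deep dive loop, under the stated limits."""
    findings = []
    # Search phase: at most 3 searches per batch, then reassess.
    for query in queries[:3]:
        results = web_search(query)
        if not results:
            continue
        # Deep dive phase: fetch only the most promising result per query.
        top = results[0]
        findings.append({
            "query": query,
            "url": top["url"],
            "content": web_fetch(top["url"], max_chars=5000),
        })
    # Cross-reference and synthesize steps would operate over `findings`.
    return {"task": task, "findings": findings}
```

The 3-search cap mirrors the "assess before continuing" rule; a fuller loop would repeat batches until the per-task budget is spent.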

Search Strategies

Factual queries

Search → verify across 2+ sources → report with citations.

Comparison/market research

Search each option separately → fetch detail pages → build comparison table → recommend.
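The "build comparison table" step can be sketched as collecting per-option facts and rendering them as Markdown. Option names and fields here are illustrative, not from the skill itself:

```python
def comparison_table(options: dict[str, dict[str, str]]) -> str:
    """Render per-option findings as a Markdown comparison table.

    `options` maps an option name to a dict of field -> value; missing
    fields are shown as "?" so gaps stay visible to the user.
    """
    fields = sorted({f for facts in options.values() for f in facts})
    header = "| Option | " + " | ".join(fields) + " |"
    divider = "|" + "---|" * (len(fields) + 1)
    rows = [
        "| " + name + " | "
        + " | ".join(facts.get(f, "?") for f in fields) + " |"
        for name, facts in options.items()
    ]
    return "\n".join([header, divider, *rows])
```

Keeping missing values as "?" rather than dropping the row makes the table double as a gap report for the Confidence & Gaps section.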

People/company research

Search name + context → fetch LinkedIn/company pages → cross-reference news → compile profile.

How-to/technical

Search with specific technical terms → fetch documentation/guides → distill steps.

Guidelines

  • Max 10 searches per task to avoid rate limits and token waste.
  • Max 5 page fetches — be selective about which URLs to deep-dive.
  • Always include source URLs so the user can verify.
  • If a search returns nothing useful, rephrase and retry once before moving on.
  • For time-sensitive info, use the freshness parameter (pd/pw/pm/py).
  • Prefer web_fetch with maxChars: 5000 to keep context manageable.
  • If the task is massive, suggest breaking it into sub-tasks or spawning sub-agents.
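The numeric limits above can be enforced with a small budget tracker. A minimal sketch, assuming these constant names and the Budget class are our own invention (the skill does not define a config schema):

```python
from dataclasses import dataclass

# Limits taken from the guidelines above; the freshness codes map
# human-readable windows to the pd/pw/pm/py parameter values.
MAX_SEARCHES = 10
MAX_FETCHES = 5
DEFAULT_MAX_CHARS = 5000
FRESHNESS = {"day": "pd", "week": "pw", "month": "pm", "year": "py"}

@dataclass
class Budget:
    """Tracks tool usage so a task never exceeds its search/fetch caps."""
    searches: int = 0
    fetches: int = 0

    def can_search(self) -> bool:
        return self.searches < MAX_SEARCHES

    def can_fetch(self) -> bool:
        return self.fetches < MAX_FETCHES
```

Checking the budget before every tool call is what turns the guidelines from advice into a hard stop, at which point the agent should suggest splitting the task into sub-tasks.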

Output Format

## [Topic]

### Key Findings
- Finding 1 (Source: url)
- Finding 2 (Source: url)

### Details
[Expanded analysis]

### Sources
1. [Title](url) — what was found here
2. [Title](url) — what was found here

### Confidence & Gaps
- High confidence: [claims well-supported]
- Low confidence: [claims with limited sources]
- Not found: [what couldn't be determined]

Installation


openclaw install web-searcher
    


Tags

#web-and-frontend-development #data #web

Quick Info

Category Development
Model Claude 3.5
Complexity Multi-Agent
Author kassimisai
Last Updated 3/10/2026
Optimized for Claude 3.5
