Web Multi Search

Search the web using multiple search engines simultaneously (Bing, Yahoo, Startpage, Aol, Ask)

Rating
4.7 (78 reviews)
Downloads
7,726 downloads
Version
1.0.0

Web Multi-Search

Search the web across multiple search engines at once using async-search-scraper. Collects results from Bing, Yahoo, Startpage, Aol, and Ask, iterating through multiple result pages.
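The "at once" part of the description implies concurrent requests, presumably via asyncio. A minimal sketch of the idea, using stubbed fetchers rather than the real async-search-scraper API (which differs):

```python
import asyncio

# Hypothetical stand-in for a real engine client; the actual
# async-search-scraper interface is not shown here.
async def fetch_engine(engine: str, query: str, pages: int) -> list[dict]:
    await asyncio.sleep(0.01)  # simulate network latency
    return [{"engine": engine, "title": f"{query} result p{p}"}
            for p in range(1, pages + 1)]

async def multi_search(query: str, engines: list[str], pages: int = 3) -> list[dict]:
    # One task per engine, gathered concurrently rather than in sequence.
    tasks = [fetch_engine(e, query, pages) for e in engines]
    per_engine = await asyncio.gather(*tasks)
    return [r for results in per_engine for r in results]

results = asyncio.run(multi_search("python async", ["Bing", "Yahoo"], pages=2))
print(len(results))  # 4 (2 engines x 2 pages)
```

The total wall-clock time is roughly that of the slowest engine, not the sum of all of them.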

Setup

bash
cd skills/web-multi-search
python3 -m pip install -r requirements.txt
python3 -m pip install git+https://github.com/soxoj/async-search-scraper.git --no-deps

Note: The library must be installed from the GitHub URL, not PyPI. The --no-deps flag is required because the library pins bs4 (wrong package name); the real dependencies are already in requirements.txt.

Linux (apt) fallback

If pip isn't available, install the system packages:

bash
sudo apt-get update
sudo apt-get install -y python3-requests python3-aiohttp python3-aiohttp-socks python3-bs4

Usage

Run the search script with a query:

bash
python3 web_multi_search.py "your search query"

Options

| Flag | Default | Description |
|------|---------|-------------|
| `--pages` | `3` | Number of result pages per engine |
| `--engines` | all working | Comma-separated list: `bing,yahoo,startpage,aol,ask` |
| `--proxy` | none | HTTP/SOCKS proxy URL |
| `--timeout` | `10` | HTTP timeout in seconds |
| `--output` | `json` | Output format: `json`, `csv`, `text` |
| `--unique-urls` | off | Deduplicate results by URL |
| `--unique-domains` | off | Deduplicate results by domain |
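The two deduplication flags likely behave along these lines (an illustrative sketch, not the script's actual implementation):

```python
from urllib.parse import urlparse

def dedupe(results: list[dict], by: str = "url") -> list[dict]:
    # Keep the first result per unique URL, or per unique domain.
    seen, out = set(), []
    for r in results:
        key = r["link"] if by == "url" else urlparse(r["link"]).netloc
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

results = [
    {"link": "https://example.com/a"},
    {"link": "https://example.com/a"},  # duplicate URL
    {"link": "https://example.com/b"},  # same domain, different URL
]
print(len(dedupe(results, by="url")))     # 2
print(len(dedupe(results, by="domain")))  # 1
```

`--unique-urls` drops exact repeats across engines; `--unique-domains` is stricter and keeps only one hit per site.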

Examples

bash
# Basic search, 3 pages per engine, JSON output
python3 web_multi_search.py "python async tutorial"

# Search only Bing and Yahoo, 5 pages each
python3 web_multi_search.py "machine learning" --engines bing,yahoo --pages 5

# Unique URLs only, CSV output
python3 web_multi_search.py "OpenClaw skills" --unique-urls --output csv

# Use a proxy
python3 web_multi_search.py "privacy tools" --proxy socks5://127.0.0.1:9050

Output format

JSON output (default) returns an array of result objects:

json
[
  {
    "engine": "Bing",
    "host": "example.com",
    "link": "https://example.com/page",
    "title": "Page Title",
    "text": "Snippet of the page content..."
  }
]
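For `--output csv`, the same fields presumably become columns. If you need to convert saved JSON output to CSV yourself, something like this works (the column order here is an assumption, not the script's documented behavior):

```python
import csv
import io
import json

# Sample output in the JSON shape shown above.
raw = '''[{"engine": "Bing", "host": "example.com",
           "link": "https://example.com/page",
           "title": "Page Title", "text": "Snippet of the page content..."}]'''

results = json.loads(raw)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["engine", "host", "link", "title", "text"])
writer.writeheader()
writer.writerows(results)
print(buf.getvalue().splitlines()[0])  # engine,host,link,title,text
```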

How the agent should use this

When you need to search the web for information:

  • Run the script with the user's query.
  • Parse the JSON output to extract relevant links and snippets.
  • Use the results to answer the user or to fetch specific pages for deeper reading.
bash
# Quick search and capture output
RESULTS=$(python3 /path/to/skills/web-multi-search/web_multi_search.py "query here" 2>/dev/null)
echo "$RESULTS" | python3 -c "import json,sys; data=json.load(sys.stdin); [print(r['link'], r['title']) for r in data[:10]]"

Available search engines

| Engine | Status |
|--------|--------|
| Bing | Working |
| Yahoo | Working |
| Startpage | Working |
| Aol | Working |
| Ask | Working |
| Google | Not working (requires JS) |
| DuckDuckGo | Not working (CAPTCHA) |
| Dogpile | Not working (HTTP 403) |
| Mojeek | Not working (HTTP 403) |
| Qwant | Not working (HTTP 403) |
| Torch | Requires TOR proxy |

Troubleshooting

  • Import errors: Make sure you installed the library from the GitHub URL with --no-deps.
  • Empty results / bans: Search engines may rate-limit. Increase delay or use fewer pages.
  • Torch engine: Only works with a running TOR proxy at socks5://127.0.0.1:9050.
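When an engine starts returning empty pages due to rate limiting, spacing out retries usually helps. A simple exponential-backoff wrapper (illustrative only, shown here with a stubbed search function):

```python
import time

def with_backoff(search_fn, query, retries=3, base_delay=1.0):
    # Retry a query, doubling the wait each time it comes back empty
    # (a common symptom of rate limiting).
    for attempt in range(retries):
        results = search_fn(query)
        if results:
            return results
        time.sleep(base_delay * (2 ** attempt))
    return []

# Stub that fails once, then succeeds (stands in for a real engine call).
calls = []
def flaky(query):
    calls.append(query)
    return [] if len(calls) < 2 else [{"title": "ok"}]

print(with_backoff(flaky, "test", base_delay=0.01))
```

Combining this with fewer `--pages` per run keeps request volume low enough to avoid bans in the first place.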

Installation

bash
openclaw install web-multi-search

Tags

#web-and-frontend-development #web

Quick Info

Category: Development
Model: Claude 3.5
Complexity: One-Click
Author: orosha-ai
Last Updated: 3/10/2026
