
Decodo Scraper Skill

Search Google, scrape web pages, Amazon product pages, YouTube subtitles, or Reddit (post/subreddit)

Rating: 4.4 (378 reviews) · Downloads: 776 · Version: 1.0.0



# Decodo Scraper OpenClaw Skill

Use this skill to search Google, scrape any URL, fetch Amazon product or search data, pull YouTube subtitles, or read Reddit posts and subreddit listings via the Decodo Web Scraping API. Output formats by tool:

- Search: JSON object of result sections
- Scrape URL: plain markdown
- Amazon product / Amazon search: parsed product-page or search results (JSON); Amazon search takes `--query`
- YouTube subtitles: transcript/subtitles
- Reddit post / Reddit subreddit: post or listing content (JSON)

Authentication: set `DECODO_AUTH_TOKEN` (the Basic auth token from Decodo Dashboard → Scraping APIs) in your environment or in a `.env` file at the repo root.
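The token lookup can be sketched roughly as follows; the `.env` parsing here is an illustration of the fallback behavior, not the script's actual implementation:

```python
import os

def load_auth_token(env_file=".env"):
    """Return DECODO_AUTH_TOKEN from the environment, falling back to a
    simple KEY=VALUE .env file (illustrative parsing, not the script's own)."""
    token = os.environ.get("DECODO_AUTH_TOKEN")
    if token:
        return token
    try:
        with open(env_file) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith("DECODO_AUTH_TOKEN="):
                    return line.split("=", 1)[1].strip().strip('"')
    except FileNotFoundError:
        pass
    return None

os.environ["DECODO_AUTH_TOKEN"] = "example-token"   # demo only
print(load_auth_token())  # → example-token
```

Environment variables take precedence over the `.env` file in this sketch.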

Errors: On failure the script writes a JSON error to stderr and exits with code 1.
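A caller can detect failure by checking the exit code and parsing stderr. The sketch below uses a stand-in failing command instead of the real tool, and the error shape (`{"error": ...}`) is an assumption, not the script's exact schema:

```python
import json
import subprocess
import sys

# Stand-in for a failing tools/scrape.py run: writes a JSON error to
# stderr and exits with code 1, matching the documented failure behavior.
fail_cmd = [
    sys.executable, "-c",
    "import json, sys; sys.stderr.write(json.dumps({'error': 'boom'})); sys.exit(1)",
]

proc = subprocess.run(fail_cmd, capture_output=True, text=True)
if proc.returncode != 0:
    err = json.loads(proc.stderr)   # script writes a JSON error to stderr
    print("scrape failed:", err["error"])
```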


## Tools

### 1. Search Google

Use this to find URLs, answers, or structured search results. The API returns a JSON object whose `results` key contains several sections (not all are present for every query):

| Section | Description |
| --- | --- |
| `organic` | Main search results (titles, links, snippets). |
| `ai_overviews` | AI-generated overviews or summaries when Google shows them. |
| `paid` | Paid/sponsored results (ads). |
| `related_questions` | "People also ask"-style questions and answers. |
| `related_searches` | Suggested related search queries. |
| `discussions_and_forums` | Forum or discussion results (e.g. Reddit, Stack Exchange). |

The script outputs only the inner `results` object (these sections); pagination info (`page`, `last_visible_page`, `parse_status_code`) is not included.
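A consumer might pull titles and links out of the `organic` section like this; the sample object and field names below are assumptions based on the section list above, not a verbatim API response:

```python
import json

# Hypothetical, truncated example of the script's stdout for a search.
raw = """
{
  "organic": [
    {"title": "Best laptops 2025", "link": "https://example.com/laptops",
     "snippet": "Our picks for this year."}
  ],
  "related_searches": ["best budget laptops"]
}
"""

results = json.loads(raw)
for item in results.get("organic", []):
    print(item["title"], "->", item["link"])
```

Using `.get(..., [])` keeps the loop safe when a section is absent for a given query.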

Command:

```bash
python3 tools/scrape.py --target google_search --query "your search query"
```

Examples:

```bash
python3 tools/scrape.py --target google_search --query "best laptops 2025"
python3 tools/scrape.py --target google_search --query "python requests tutorial"
```

Optional: add `--geo us` or `--locale en` to set location and language.


### 2. Scrape URL

Use this to get the content of a specific web page. By default the API returns the content as Markdown (cleaner for LLMs and lower token usage).

Command:

```bash
python3 tools/scrape.py --target universal --url "https://example.com"
```

Examples:

```bash
python3 tools/scrape.py --target universal --url "https://example.com"
python3 tools/scrape.py --target universal --url "https://news.ycombinator.com/"
```


### 3. Amazon product page

Use this to get parsed data from an Amazon product page (or another Amazon page). Pass the page URL as `--url`. The script sends `parse: true` and outputs the inner `results` object (e.g. ads, product details).

Command:

```bash
python3 tools/scrape.py --target amazon --url "https://www.amazon.com/dp/PRODUCT_ID"
```

Examples:

```bash
python3 tools/scrape.py --target amazon --url "https://www.amazon.com/dp/B09H74FXNW"
```


### 4. Amazon search

Use this to search Amazon and get parsed results (search results list, `delivery_postcode`, etc.). Pass the search query as `--query`.

Command:

```bash
python3 tools/scrape.py --target amazon_search --query "your search query"
```

Examples:

```bash
python3 tools/scrape.py --target amazon_search --query "laptop"
```


### 5. YouTube subtitles

Use this to get the subtitles/transcript for a YouTube video. Pass the video ID (e.g. from `youtube.com/watch?v=VIDEO_ID`) as `--query`.

Command:

```bash
python3 tools/scrape.py --target youtube_subtitles --query "VIDEO_ID"
```

Examples:

```bash
python3 tools/scrape.py --target youtube_subtitles --query "dFu9aKJoqGg"
```
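Since the tool takes a video ID rather than a full URL, extracting the ID from a `watch` URL can be sketched as:

```python
from urllib.parse import urlparse, parse_qs

def video_id_from_url(url):
    """Extract the v= parameter from a youtube.com/watch URL, or None."""
    query = parse_qs(urlparse(url).query)
    ids = query.get("v")
    return ids[0] if ids else None

print(video_id_from_url("https://www.youtube.com/watch?v=dFu9aKJoqGg"))  # → dFu9aKJoqGg
```

Note this only handles the `watch?v=` form; short `youtu.be/` links carry the ID in the path instead.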


### 6. Reddit post

Use this to get the content of a Reddit post (thread). Pass the full post URL as `--url`.

Command:

```bash
python3 tools/scrape.py --target reddit_post --url "https://www.reddit.com/r/SUBREDDIT/comments/ID/..."
```

Examples:

```bash
python3 tools/scrape.py --target reddit_post --url "https://www.reddit.com/r/nba/comments/17jrqc5/serious_next_day_thread_postgame_discussion/"
```


### 7. Reddit subreddit

Use this to get the post listing of a subreddit. Pass the subreddit URL as `--url`.

Command:

```bash
python3 tools/scrape.py --target reddit_subreddit --url "https://www.reddit.com/r/SUBREDDIT/"
```

Examples:

```bash
python3 tools/scrape.py --target reddit_subreddit --url "https://www.reddit.com/r/nba/"
```


## Summary

| Action | Target | Argument | Example command |
| --- | --- | --- | --- |
| Search | `google_search` | `--query` | `python3 tools/scrape.py --target google_search --query "laptop"` |
| Scrape page | `universal` | `--url` | `python3 tools/scrape.py --target universal --url "https://example.com"` |
| Amazon product | `amazon` | `--url` | `python3 tools/scrape.py --target amazon --url "https://www.amazon.com/dp/B09H74FXNW"` |
| Amazon search | `amazon_search` | `--query` | `python3 tools/scrape.py --target amazon_search --query "laptop"` |
| YouTube subtitles | `youtube_subtitles` | `--query` | `python3 tools/scrape.py --target youtube_subtitles --query "dFu9aKJoqGg"` |
| Reddit post | `reddit_post` | `--url` | `python3 tools/scrape.py --target reddit_post --url "https://www.reddit.com/r/nba/comments/17jrqc5/..."` |
| Reddit subreddit | `reddit_subreddit` | `--url` | `python3 tools/scrape.py --target reddit_subreddit --url "https://www.reddit.com/r/nba/"` |

Output: Search → JSON (sections). Scrape URL → markdown. Amazon / Amazon search → JSON (`results`, e.g. ads, product info, `delivery_postcode`). YouTube → transcript. Reddit → JSON (content).
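The table maps directly onto a small command builder; a sketch, where the target/argument mapping mirrors the summary above and nothing here is part of the script itself:

```python
# Target name and argument flag for each action, per the summary table.
TARGETS = {
    "search":            ("google_search",     "--query"),
    "scrape_page":       ("universal",         "--url"),
    "amazon_product":    ("amazon",            "--url"),
    "amazon_search":     ("amazon_search",     "--query"),
    "youtube_subtitles": ("youtube_subtitles", "--query"),
    "reddit_post":       ("reddit_post",       "--url"),
    "reddit_subreddit":  ("reddit_subreddit",  "--url"),
}

def build_command(action, value):
    """Return the argv list for a tools/scrape.py invocation."""
    target, flag = TARGETS[action]
    return ["python3", "tools/scrape.py", "--target", target, flag, value]

print(build_command("search", "laptop"))
```

The argv list can be passed straight to `subprocess.run(...)` without shell quoting concerns.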

## Installation

```bash
openclaw install decodo-scraper-skill
```


Tags

#web-and-frontend-development #web

Quick Info

- Category: Development
- Model: Gemini 2.0
- Complexity: One-Click
- Author: donatasdecodo
- Last Updated: 3/10/2026
