
DuckDB En

DuckDB CLI specialist for SQL analysis and data processing.

Rating
3.9 (335 reviews)
Downloads
4,687 downloads
Version
1.0.0

Overview

DuckDB CLI specialist for SQL analysis and data processing.

Complete Documentation

View Source →

DuckDB CLI Specialist

Helps with data analysis, SQL queries, and file conversion via the DuckDB CLI.

Quick Start

Read data files directly with SQL

bash
# CSV
duckdb -c "SELECT * FROM 'data.csv' LIMIT 10"

# Parquet
duckdb -c "SELECT * FROM 'data.parquet'"

# Multiple files with glob
duckdb -c "SELECT * FROM read_parquet('logs/*.parquet')"

# JSON
duckdb -c "SELECT * FROM read_json_auto('data.json')"

Open persistent databases

bash
# Create/open database
duckdb my_database.duckdb

# Read-only mode
duckdb -readonly existing.duckdb

Command Line Arguments

Output formats (as flags)

| Flag | Format |
| --- | --- |
| `-csv` | Comma-separated |
| `-json` | JSON array |
| `-table` | ASCII table |
| `-markdown` | Markdown table |
| `-html` | HTML table |
| `-line` | One value per line |

Execution arguments

| Argument | Description |
| --- | --- |
| `-c COMMAND` | Run SQL and exit |
| `-f FILENAME` | Run script from file |
| `-init FILE` | Use alternative to `~/.duckdbrc` |
| `-readonly` | Open in read-only mode |
| `-echo` | Show commands before execution |
| `-bail` | Stop on first error |
| `-header` / `-noheader` | Show/hide column headers |
| `-nullvalue TEXT` | Text for NULL values |
| `-separator SEP` | Column separator |
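
A sketch combining several of the flags above (file, script, and column names are hypothetical):

```shell
# Headerless CSV output with a custom NULL marker, redirected to a file:
duckdb -csv -noheader -nullvalue 'NA' \
  -c "SELECT id, region FROM 'data.csv'" > regions.csv

# Echo each statement and stop at the first error while running a script:
duckdb -echo -bail -f report.sql my_database.duckdb
```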

Data Conversion

CSV to Parquet

bash
duckdb -c "COPY (SELECT * FROM 'input.csv') TO 'output.parquet' (FORMAT PARQUET)"

Parquet to CSV

bash
duckdb -c "COPY (SELECT * FROM 'input.parquet') TO 'output.csv' (HEADER, DELIMITER ',')"

JSON to Parquet

bash
duckdb -c "COPY (SELECT * FROM read_json_auto('input.json')) TO 'output.parquet' (FORMAT PARQUET)"

Convert with filtering

bash
duckdb -c "COPY (SELECT * FROM 'data.csv' WHERE amount > 1000) TO 'filtered.parquet' (FORMAT PARQUET)"

Dot Commands

Schema inspection

| Command | Description |
| --- | --- |
| `.tables [pattern]` | Show tables (optionally filtered by a LIKE pattern) |
| `.schema [table]` | Show CREATE statements |
| `.databases` | Show attached databases |

Output control

| Command | Description |
| --- | --- |
| `.mode FORMAT` | Change output format |
| `.output file` | Send output to file |
| `.once file` | Send next output only to file |
| `.headers on/off` | Show/hide column headers |
| `.separator COL ROW` | Set column/row separators |

Queries

| Command | Description |
| --- | --- |
| `.timer on/off` | Show execution time |
| `.echo on/off` | Show commands before execution |
| `.bail on/off` | Stop on error |
| `.read file.sql` | Run SQL from file |

Editing

| Command | Description |
| --- | --- |
| `.edit` or `\e` | Open query in external editor |
| `.help [pattern]` | Show help |
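
Dot commands can also be scripted non-interactively, for example via a heredoc; the database, table, and output file names below are hypothetical:

```shell
# Time the query and write its result as a Markdown table to report.md,
# then print the table's CREATE statement to the terminal
duckdb my_database.duckdb <<'EOF'
.timer on
.mode markdown
.once report.md
SELECT category, COUNT(*) AS n FROM sales GROUP BY category;
.schema sales
EOF
```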

Output Formats (18 available)

Data export

  • csv - Comma-separated for spreadsheets
  • tabs - Tab-separated
  • json - JSON array
  • jsonlines - Newline-delimited JSON (streaming)

Readable formats

  • duckbox (default) - Pretty table with Unicode box-drawing characters
  • table - Simple ASCII table
  • markdown - For documentation
  • html - HTML table
  • latex - For academic papers

Specialized

  • insert TABLE - SQL INSERT statements
  • column - Columns with adjustable width
  • line - One value per line
  • list - Pipe-separated
  • trash - Discard output
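
Because arguments are processed in order, a format flag (or a `.mode` dot command passed with `-c`) selects any of these per invocation; `data.csv` is a placeholder:

```shell
duckdb -markdown -c "SELECT * FROM 'data.csv' LIMIT 5"    # Markdown table
duckdb -json     -c "SELECT * FROM 'data.csv' LIMIT 5"    # JSON array
duckdb -c ".mode jsonlines" -c "SELECT * FROM 'data.csv'" # one JSON object per line
```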

Keyboard Shortcuts (macOS/Linux)

Navigation

| Shortcut | Action |
| --- | --- |
| Home / End | Start/end of line |
| Ctrl+Left / Ctrl+Right | Jump word |
| Ctrl+A / Ctrl+E | Start/end of buffer |

History

| Shortcut | Action |
| --- | --- |
| Ctrl+P / Ctrl+N | Previous/next command |
| Ctrl+R | Search history |
| Alt+< / Alt+> | First/last in history |

Editing

| Shortcut | Action |
| --- | --- |
| Ctrl+W | Delete word backward |
| Alt+D | Delete word forward |
| Alt+U / Alt+L | Uppercase/lowercase word |
| Ctrl+K | Delete to end of line |

Autocomplete

| Shortcut | Action |
| --- | --- |
| Tab | Autocomplete / next suggestion |
| Shift+Tab | Previous suggestion |
| Esc+Esc | Undo autocomplete |

Autocomplete

Pressing Tab triggers context-aware completion of:

  • Keywords - SQL commands
  • Table names - Database objects
  • Column names - Fields and functions
  • File names - Path completion

Database Operations

Create table from file

sql
CREATE TABLE sales AS SELECT * FROM 'sales_2024.csv';

Insert data

sql
INSERT INTO sales SELECT * FROM 'sales_2025.csv';

Export table

sql
COPY sales TO 'backup.parquet' (FORMAT PARQUET);

Analysis Examples

Quick statistics

sql
SELECT
    COUNT(*) as count,
    AVG(amount) as average,
    SUM(amount) as total
FROM 'transactions.csv';

Grouping

sql
SELECT
    category,
    COUNT(*) as count,
    SUM(amount) as total
FROM 'data.csv'
GROUP BY category
ORDER BY total DESC;

Join on files

sql
SELECT a.*, b.name
FROM 'orders.csv' a
JOIN 'customers.parquet' b ON a.customer_id = b.id;

Describe data

sql
DESCRIBE SELECT * FROM 'data.csv';

Pipe and stdin

bash
# Read from stdin
cat data.csv | duckdb -c "SELECT * FROM read_csv('/dev/stdin')"

# Pipe to another command
duckdb -csv -c "SELECT * FROM 'data.parquet'" | head -20

# Write to stdout
duckdb -c "COPY (SELECT * FROM 'data.csv') TO '/dev/stdout' (FORMAT CSV)"

Configuration

Save common settings in ~/.duckdbrc:

sql
.timer on
.mode duckbox
.maxrows 50
.highlight on

Syntax highlighting colors

sql
.keyword green
.constant yellow
.comment brightblack
.error red

External Editor

Open complex queries in your editor:

sql
.edit

The editor is chosen from, in order: `DUCKDB_EDITOR`, `EDITOR`, `VISUAL`, then `vi`.

Safe Mode

Safe mode restricts the CLI's access to the host system. When enabled:

  • No external file access
  • Dot commands such as `.read`, `.output`, `.import`, and `.sh` are disabled
  • Cannot be turned off again within the same session
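
Assuming the CLI exposes this via a `-safe` flag (verify with `duckdb -help` for your version):

```shell
# Open a database with file access and shell-escape dot commands disabled
duckdb -safe my_database.duckdb
```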

Tips

  • Use LIMIT on large files for quick preview
  • Parquet is faster than CSV for repeated queries
  • `read_csv_auto` and `read_json_auto` infer column types automatically
  • Arguments are processed in order (like SQLite CLI)
  • WSL2 may show incorrect memory_limit values on some Ubuntu versions
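
The first two tips combined, as a sketch (file names hypothetical): preview cheaply, convert once, then query the Parquet copy.

```shell
# Peek at the inferred schema and a few rows without scanning everything
duckdb -c "DESCRIBE SELECT * FROM read_csv_auto('big.csv')"
duckdb -c "SELECT * FROM read_csv_auto('big.csv') LIMIT 5"

# One-time conversion; subsequent queries hit the faster Parquet file
duckdb -c "COPY (SELECT * FROM 'big.csv') TO 'big.parquet' (FORMAT PARQUET)"
duckdb -c "SELECT COUNT(*) FROM 'big.parquet'"
```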

Installation

bash

openclaw install duckdb-en
    


Tags

#data-and-analytics #cli #data

Quick Info

Category Data Processing
Model Claude 3.5
Complexity One-Click
Author camelsprout
Last Updated 3/10/2026

Ready to Install?

Get started with this skill in seconds

openclaw install duckdb-en