
Cut Your Tokens: 97% Savings on Session Transcripts via Observation Extraction

Claw Compactor v6.0 — 50%+ savings through rule-based compression, dictionary encoding, session observation.

Rating: 4.4 (167 reviews)
Downloads: 22,405
Version: 1.0.0

Overview

Claw Compactor v6.0 — 50%+ savings through rule-based compression, dictionary encoding, session observation.

Key Features

1. 5 compression layers working in sequence for maximum savings
2. Zero LLM cost — all compression is rule-based and deterministic
3. Lossless roundtrip for dictionary, RLE, and rule-based compression
4. ~97% savings on session transcripts via observation extraction
5. Tiered summaries (L0/L1/L2) for progressive context loading
6. CJK-aware — full Chinese/Japanese/Korean support
7. One command (`full`) runs everything in optimal order

Complete Documentation

View Source →

🦞 Claw Compactor


"Cut your tokens. Keep your facts."

Cut your AI agent's token spend in half. One command compresses your entire workspace — memory files, session transcripts, sub-agent context — using 5 layered compression techniques. Deterministic. Mostly lossless. No LLM required.

Features

  • 5 compression layers working in sequence for maximum savings
  • Zero LLM cost — all compression is rule-based and deterministic
  • Lossless roundtrip for dictionary, RLE, and rule-based compression
  • ~97% savings on session transcripts via observation extraction
  • Tiered summaries (L0/L1/L2) for progressive context loading
  • CJK-aware — full Chinese/Japanese/Korean support
  • One command (full) runs everything in optimal order

5 Compression Layers

| # | Layer | Method | Savings | Lossless? |
|---|-------|--------|---------|-----------|
| 1 | Rule engine | Dedup lines, strip markdown filler, merge sections | 4-8% | Yes |
| 2 | Dictionary encoding | Auto-learned codebook, `$XX` substitution | 4-5% | Yes |
| 3 | Observation compression | Session JSONL → structured summaries | ~97% | No* |
| 4 | RLE patterns | Path shorthand (`$WS`), IP prefix, enum compaction | 1-2% | Yes |
| 5 | Compressed Context Protocol | ultra/medium/light abbreviation | 20-60% | No* |

*Lossy techniques preserve all facts and decisions; only verbose formatting is removed.
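To see Layer 4 concretely: the path-shorthand trick collapses a repeated workspace prefix to `$WS` and expands it back losslessly. The sketch below is illustrative only — the workspace path and function names are assumptions, not the actual `rle.py` API.

```python
# Illustrative sketch of the Layer 4 path shorthand; names and the
# workspace path are assumptions, not the real rle.py API.
WS = "/home/user/workspace"  # hypothetical workspace root

def encode_paths(text: str, ws: str = WS) -> str:
    """Collapse the workspace prefix to the $WS shorthand."""
    return text.replace(ws, "$WS")

def decode_paths(text: str, ws: str = WS) -> str:
    """Expand $WS back to the full prefix."""
    return text.replace("$WS", ws)

sample = f"{WS}/memory/MEMORY.md and {WS}/sessions/log.jsonl"
assert decode_paths(encode_paths(sample)) == sample  # lossless roundtrip
```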

Quick Start

```bash
git clone https://github.com/aeromomo/claw-compactor.git
cd claw-compactor

# See how much you'd save (non-destructive)
python3 scripts/mem_compress.py /path/to/workspace benchmark

# Compress everything
python3 scripts/mem_compress.py /path/to/workspace full
```

**Requirements:** Python 3.9+. Optional: `pip install tiktoken` for exact token counts (falls back to heuristic).
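The fallback behavior is roughly as sketched below: exact counts via tiktoken when it is installed, a heuristic otherwise. The CJK split shown (about one token per CJK character, chars ÷ 4 for the rest) is an assumption consistent with the FAQ, not `lib/tokens.py` verbatim.

```python
def estimate_tokens(text: str, chars_per_token: int = 4) -> int:
    """Exact count via tiktoken when installed, else a CJK-aware heuristic."""
    try:
        import tiktoken
        return len(tiktoken.get_encoding("cl100k_base").encode(text))
    except ImportError:
        # Assumed heuristic: CJK chars ~1 token each, everything else chars/4.
        cjk = sum(1 for c in text
                  if "\u4e00" <= c <= "\u9fff"    # CJK unified ideographs
                  or "\u3040" <= c <= "\u30ff"    # Japanese kana
                  or "\uac00" <= c <= "\ud7af")   # Korean hangul
        return cjk + (len(text) - cjk) // chars_per_token

print(estimate_tokens("compress this workspace — 压缩这个工作区"))
```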

Architecture

```text
┌─────────────────────────────────────────────────────────────┐
│                      mem_compress.py                        │
│                   (unified entry point)                     │
└──────┬──────┬──────┬──────┬──────┬──────┬──────┬──────┬────┘
       │      │      │      │      │      │      │      │
       ▼      ▼      ▼      ▼      ▼      ▼      ▼      ▼
  estimate compress  dict  dedup observe tiers  audit optimize
       └──────┴──────┴──┬───┴──────┴──────┴──────┴──────┘
                        ▼
                  ┌────────────────┐
                  │     lib/       │
                  │ tokens.py      │ ← tiktoken or heuristic
                  │ markdown.py    │ ← section parsing
                  │ dedup.py       │ ← shingle hashing
                  │ dictionary.py  │ ← codebook compression
                  │ rle.py         │ ← path/IP/enum encoding
                  │ tokenizer_     │
                  │   optimizer.py │ ← format optimization
                  │ config.py      │ ← JSON config
                  │ exceptions.py  │ ← error types
                  └────────────────┘
```
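A rough sketch of how a unified entry point like this can route the subcommands and global flags documented below; the actual `mem_compress.py` wiring may differ.

```python
import argparse

# Sketch only: how one script can dispatch all ten commands. Each command
# would map onto a lib/ module (dict -> dictionary.py, dedup -> dedup.py, ...).
def main() -> None:
    parser = argparse.ArgumentParser(prog="mem_compress.py")
    parser.add_argument("workspace")
    parser.add_argument("command", choices=[
        "full", "benchmark", "compress", "dict", "observe",
        "tiers", "dedup", "estimate", "audit", "optimize",
    ])
    parser.add_argument("--json", action="store_true")
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--since")  # YYYY-MM-DD
    parser.add_argument("--auto-merge", action="store_true")
    args = parser.parse_args()
    print(f"{args.command} on {args.workspace} (dry_run={args.dry_run})")

if __name__ == "__main__":
    main()
```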

Commands

All commands: `python3 scripts/mem_compress.py <workspace> <command> [options]`

| Command | Description | Typical Savings |
|---------|-------------|-----------------|
| `full` | Complete pipeline (all steps in order) | 50%+ combined |
| `benchmark` | Dry-run performance report | — |
| `compress` | Rule-based compression | 4-8% |
| `dict` | Dictionary encoding with auto-codebook | 4-5% |
| `observe` | Session transcript → observations | ~97% |
| `tiers` | Generate L0/L1/L2 summaries | 88-95% on sub-agent loads |
| `dedup` | Cross-file duplicate detection | varies |
| `estimate` | Token count report | — |
| `audit` | Workspace health check | — |
| `optimize` | Tokenizer-level format fixes | 1-3% |

Global Options

  • `--json` — Machine-readable JSON output
  • `--dry-run` — Preview changes without writing
  • `--since YYYY-MM-DD` — Filter sessions by date
  • `--auto-merge` — Auto-merge duplicates (`dedup`)

Real-World Savings

| Workspace State | Typical Savings | Notes |
|---|---|---|
| Session transcripts (`observe`) | **~97%** | Megabytes of JSONL → concise observation MD |
| Verbose/new workspace | **50-70%** | First run on unoptimized workspace |
| Regular maintenance | **10-20%** | Weekly runs on active workspace |
| Already-optimized | **3-12%** | Diminishing returns — workspace is clean |
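The ~97% figure is plausible because transcripts are mostly verbose tool I/O, and only fact-bearing lines survive extraction. The sketch below is illustrative only: the JSONL event schema and the keep-rules are assumptions, not the real `observe` implementation.

```python
import json
from pathlib import Path

# Hypothetical schema: one JSON event per line with a "content" field.
def extract_observations(session: Path) -> str:
    kept = []
    for line in session.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        text = str(event.get("content", ""))
        # Keep only fact- and decision-bearing lines; drop verbose tool I/O.
        if any(k in text.lower() for k in ("decided", "fixed", "created", "error")):
            kept.append(f"- {text[:120]}")
    return "## Observations\n" + "\n".join(kept)
```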

cacheRetention — Complementary Optimization

Before compression runs, enable prompt caching for a 90% discount on cached tokens:

```json
{
  "models": {
    "model-name": {
      "cacheRetention": "long"
    }
  }
}
```

Compression reduces token count, caching reduces cost-per-token. Together: 50% compression + 90% cache discount = 95% effective cost reduction.
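The arithmetic behind that headline figure, assuming every remaining token is served from cache:

```python
# Worked math for the combined claim.
tokens_left = 0.50   # 50% compression -> half the tokens remain
price_left = 0.10    # 90% cache discount -> 10% of the per-token price
effective = tokens_left * price_left
print(f"effective spend: {effective:.2f}x baseline -> {1 - effective:.0%} reduction")
# effective spend: 0.05x baseline -> 95% reduction
```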

Heartbeat Automation

Run weekly or on heartbeat:

```markdown
## Memory Maintenance (weekly)
- python3 skills/claw-compactor/scripts/mem_compress.py <workspace> benchmark
- If savings > 5%: run full pipeline
- If pending transcripts: run observe
```
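The same logic as a script, for setups that prefer code over a checklist. The `savings_percent` key is hypothetical — inspect your `benchmark --json` output for the real field name.

```python
import json
import subprocess
import sys

WORKSPACE = "/path/to/workspace"  # adjust to your workspace root

# Dry-run benchmark first; only run the full pipeline when it pays off.
result = subprocess.run(
    [sys.executable, "scripts/mem_compress.py", WORKSPACE, "benchmark", "--json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
if report.get("savings_percent", 0) > 5:   # hypothetical field name
    subprocess.run(
        [sys.executable, "scripts/mem_compress.py", WORKSPACE, "full"],
        check=True,
    )
```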

Cron example:

```text
0 3 * * 0 cd /path/to/skills/claw-compactor && python3 scripts/mem_compress.py /path/to/workspace full
```

Configuration

Optional claw-compactor-config.json in workspace root:

```json
{
  "chars_per_token": 4,
  "level0_max_tokens": 200,
  "level1_max_tokens": 500,
  "dedup_similarity_threshold": 0.6,
  "dedup_shingle_size": 3
}
```

All fields optional — sensible defaults are used when absent.
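The two `dedup` fields map onto shingle-based similarity: split text into `dedup_shingle_size`-word shingles and compare shingle sets with Jaccard similarity against `dedup_similarity_threshold`. A sketch of the idea, not `lib/dedup.py` itself:

```python
def shingles(text: str, size: int = 3) -> set:
    """All size-word shingles of the text (size = dedup_shingle_size)."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(a: str, b: str, size: int = 3) -> float:
    """Jaccard similarity between two shingle sets."""
    sa, sb = shingles(a, size), shingles(b, size)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

a = "the dictionary codebook must travel with the memory files at all times"
b = "the dictionary codebook must travel with the memory files"
print(similarity(a, b))  # 0.7 -> above the 0.6 threshold, flagged as duplicates
```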

Artifacts

| File | Purpose |
|------|---------|
| `memory/.codebook.json` | Dictionary codebook (must travel with memory files) |
| `memory/.observed-sessions.json` | Tracks processed transcripts |
| `memory/observations/` | Compressed session summaries |
| `memory/MEMORY-L0.md` | Level 0 summary (~200 tokens) |
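One plausible way an L0 summary stays within its ~200-token budget is to keep headings plus the first body line of each section. This is a hypothetical approach, not necessarily what `tiers` does:

```python
def build_l0(markdown: str, max_tokens: int = 200, chars_per_token: int = 4) -> str:
    """Keep headings and the first body line after each until the budget is spent."""
    budget = max_tokens * chars_per_token
    kept, used, take_next = [], 0, False
    for line in markdown.splitlines():
        is_heading = line.startswith("#")
        if is_heading or (take_next and line.strip()):
            if used + len(line) > budget:
                break
            kept.append(line)
            used += len(line)
            take_next = is_heading  # after a heading, grab its first body line
    return "\n".join(kept)

print(build_l0("# Project\nFacts here.\nVerbose detail...\n## Decisions\nUse RLE."))
```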

FAQ

**Q: Will compression lose my data?**
A: Rule engine, dictionary, RLE, and tokenizer optimization are fully lossless. Observation compression and CCP are lossy but preserve all facts and decisions.

**Q: How does dictionary decompression work?**
A: `decompress_text(text, codebook)` expands all `$XX` codes back. The codebook JSON must be present.
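A minimal sketch of that expansion, assuming two-character codes like `$01`; the actual code format in `dictionary.py` may differ.

```python
import re

def decompress_text(text: str, codebook: dict) -> str:
    """Expand every $XX code back to its phrase; unknown codes pass through."""
    return re.sub(r"\$[0-9A-Z]{2}",
                  lambda m: codebook.get(m.group(0), m.group(0)), text)

# Codebook shown inline for illustration; it normally lives in memory/.codebook.json.
codebook = {"$01": "observation compression", "$02": "full"}
print(decompress_text("$01 shrinks transcripts; run the $02 pipeline", codebook))
```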

**Q: Can I run individual steps?**
A: Yes. Every command is independent: `compress`, `dict`, `observe`, `tiers`, `dedup`, `optimize`.

**Q: What if tiktoken isn't installed?**
A: Falls back to a CJK-aware heuristic (chars ÷ 4). Results are ~90% accurate.

**Q: Does it handle Chinese/Japanese/Unicode?**
A: Yes. Full CJK support including character-aware token estimation and Chinese punctuation normalization.

Troubleshooting

  • `FileNotFoundError` on workspace: ensure the path points to the workspace root (contains `memory/` or `MEMORY.md`)
  • Dictionary decompression fails: check that `memory/.codebook.json` exists and is valid JSON
  • Zero savings on benchmark: the workspace is already optimized — nothing to do
  • `observe` finds no transcripts: check the sessions directory for `.jsonl` files
  • Token count seems wrong: install tiktoken: `pip3 install tiktoken`

License

MIT

Installation

```bash
openclaw install cut-your-tokens-97percent-savings-on-session-transcripts-via-observation-extraction
```


Tags

#coding_agents-and-ides #script

Quick Info

  • Category: Development
  • Model: Claude 3.5
  • Complexity: One-Click
  • Author: aeromomo
  • Last Updated: 3/10/2026