
Brain CMS

Continuum Memory System (CMS) for OpenClaw agents.

Rating: 3.9 (408 reviews)
Downloads: 16,018
Version: 1.0.0

Overview

Continuum Memory System (CMS) for OpenClaw agents.


Brain CMS 🧠

A neuroscience-inspired memory architecture for OpenClaw agents. Replaces flat file injection with sparse, semantic, frequency-gated memory loading.

What This Installs

```text
memory/
├── INDEX.md          ← Hippocampus: topic router + cross-links
├── ANCHORS.md        ← Permanent high-significance event store
└── schemas/          ← Domain-specific semantic schemas (you create these)

memory_brain/
├── index_memory.py   ← Embeds schemas into LanceDB vector store
├── query_memory.py   ← Semantic similarity search
├── nrem.py           ← NREM sleep cycle (compression + anchor promotion)
├── rem.py            ← REM sleep cycle (LLM consolidation via Ollama)
└── vectorstore/      ← LanceDB database (auto-created)
```

Setup (one-time)

```bash
# 1. Run the installer
python3 ~/.openclaw/workspace/skills/brain-cms/install.py

# 2. Index your schemas
cd ~/.openclaw/workspace/memory_brain
.venv/bin/python3 index_memory.py

# 3. Test retrieval
.venv/bin/python3 query_memory.py "your topic here" --sources-only
```

How It Works

**Boot sequence:** Load `MEMORY.md` (lean core) + today's daily log. Nothing else.

**When a topic appears:** Read `memory/INDEX.md` → load only the relevant schemas (spreading activation). Check `memory/ANCHORS.md` for high-significance events.

**For ambiguous topics:** Run semantic search:

```bash
memory_brain/.venv/bin/python3 memory_brain/query_memory.py "message text" --sources-only
```
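Under the hood, semantic search ranks schemas by embedding similarity. A minimal sketch of that ranking step, using toy vectors in place of real embeddings (the `rank_schemas` helper, the schema names, and the 3-dimensional vectors are illustrative, not the actual `query_memory.py` internals):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_schemas(query_vec, schema_vecs, top_k=3):
    # Return schema file names sorted by similarity to the query, best first.
    scored = [(name, cosine(query_vec, vec)) for name, vec in schema_vecs.items()]
    scored.sort(key=lambda nv: nv[1], reverse=True)
    return [name for name, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
schemas = {
    "memory/projects.md": [0.9, 0.1, 0.0],
    "memory/health.md":   [0.0, 0.9, 0.1],
    "memory/finance.md":  [0.1, 0.0, 0.9],
}
print(rank_schemas([0.8, 0.2, 0.0], schemas, top_k=2))
# → ['memory/projects.md', 'memory/health.md']
```

The real pipeline embeds with a local model via Ollama and searches a LanceDB table, but the ranking idea is the same.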

**Auto-schema creation:** When a new significant project or domain appears:

1. Create `memory/<topic>.md`
2. Add it to `INDEX.md` with triggers + priority + cross-links
3. Re-index: `memory_brain/.venv/bin/python3 memory_brain/index_memory.py`
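The README doesn't pin down the `INDEX.md` entry format; one plausible layout, consistent with the "triggers + priority + cross-links" fields above (every field value here is hypothetical):

```text
## projects
- file: memory/projects.md
- triggers: "project", "deadline", "milestone"
- priority: HIGH
- cross-links: memory/finance.md
```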
**Sleep cycles:**

```bash
# NREM — run on shutdown (~30s, no LLM)
cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 nrem.py

# REM — run weekly (2-5 min, uses local llama3.2:3b, free)
cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 rem.py
```
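If your machine stays up, both cycles can be scheduled instead of run by hand; one way, via crontab (the times shown are examples, not part of the skill, and a nightly NREM only approximates the on-shutdown run):

```text
# crontab -e: NREM nightly at 03:00, REM weekly on Sunday at 03:30
0 3 * * *  cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 nrem.py
30 3 * * 0 cd ~/.openclaw/workspace/memory_brain && .venv/bin/python3 rem.py
```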

Memory Layers (CMS)

| Layer | Files | When loaded | Purpose |
|-------|-------|-------------|---------|
| Working | `MEMORY.md` + today's log | Every session | Core context |
| Episodic | `memory/YYYY-MM-DD.md` | Session boot | Recent events |
| Semantic | `memory/*.md` schemas | On trigger | Domain knowledge |
| Anchors | `memory/ANCHORS.md` | On CRITICAL topics | Permanent ground truth |
| Vector | `memory_brain/vectorstore/` | On demand | Semantic search |

Tagging Anchors

In any daily log, tag high-significance events:

```text
[ANCHOR] Major demo success — full pipeline working end-to-end
```

NREM auto-promotes these to `ANCHORS.md` on the next shutdown.
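A minimal sketch of what that promotion step could look like (`nrem.py`'s real implementation isn't shown in this README; `promote_anchors` is a hypothetical helper that just illustrates the `[ANCHOR]` tag scan):

```python
def promote_anchors(daily_log_text, anchors_text=""):
    # Collect [ANCHOR]-tagged lines from a daily log and append any
    # not already present to the ANCHORS.md content. Idempotent.
    tagged = [line.strip() for line in daily_log_text.splitlines()
              if line.strip().startswith("[ANCHOR]")]
    existing = set(anchors_text.splitlines())
    new = [line for line in tagged if line not in existing]
    if new:
        anchors_text = anchors_text.rstrip("\n")
        anchors_text += ("\n" if anchors_text else "") + "\n".join(new) + "\n"
    return anchors_text

log = """Worked on pipeline.
[ANCHOR] Major demo success — full pipeline working end-to-end
Minor refactor."""
print(promote_anchors(log))
```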

Token Savings

Typical MEMORY.md: 150-300 lines injected every session. With Brain CMS: ~50-line core + schemas loaded only when relevant. Estimated savings: 40-60% reduction in context tokens per session.
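As a rough back-of-envelope check on that claim (the tokens-per-line figure and the 40-line schema size are assumptions, not measurements):

```python
TOKENS_PER_LINE = 12                      # rough assumption for markdown prose
baseline = 225 * TOKENS_PER_LINE          # mid-range of a 150-300 line MEMORY.md
lean = (50 + 40) * TOKENS_PER_LINE        # 50-line core + one ~40-line schema
saving = 1 - lean / baseline
print(f"{saving:.0%} fewer context tokens")
# → 60% fewer context tokens
```

Sessions that trigger more schemas land lower in the quoted 40-60% range.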

Requirements

  • Python 3.10+
  • Ollama (for embeddings + REM consolidation)
  • 500MB+ storage for vector store and models
  • lancedb, numpy, pyarrow, requests (auto-installed)

Installation

```bash
openclaw install brain-cms
```


Tags

#coding_agents-and-ides

Quick Info

- Category: Development
- Model: Claude 3.5
- Complexity: Multi-Agent
- Author: harrey401
- Last Updated: 3/10/2026
- Optimized for: Claude 3.5

Ready to Install?

Get started with this skill in seconds:

```bash
openclaw install brain-cms
```