## Overview
fmem is a local-first memory system that makes AI conversations feel natural and continuous. It remembers the precise context you need. Not entire documents or isolated keywords, but the meaningful chunks that matter.
**Core innovation:** hybrid chunking splits documents intelligently, using ~800-character chunks with table-aware atomic splitting and heading-boundary detection, rather than arbitrary token limits.
Note: This is one approach among many (LlamaIndex, Chroma, simple vector stores). fmem was built specifically for Raspberry Pi — minimal dependencies, local-first, runs on constrained hardware.
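The chunking strategy described above can be sketched roughly as follows. This is a simplified illustration, not fmem's actual splitter: the block classifier and packing rules are assumptions based on the description (heading boundaries start new chunks, tables stay atomic, everything else packs into the ~800-character budget).

```python
MAX_CHARS = 800  # fmem's stated chunk budget, in characters


def blocks(text):
    """Split markdown into atomic blocks: headings, tables, paragraphs."""
    out, cur, kind = [], [], None
    for line in text.splitlines():
        k = ("heading" if line.startswith("#")
             else "table" if line.lstrip().startswith("|")
             else "blank" if not line.strip()
             else "para")
        if k == "blank":
            if cur:
                out.append(("\n".join(cur), kind))
                cur, kind = [], None
            continue
        if k == "heading" or (kind is not None and kind != k):
            # Headings are always their own block; a type change closes the block.
            if cur:
                out.append(("\n".join(cur), kind))
            cur, kind = [line], k
        else:
            cur.append(line)
            kind = k
    if cur:
        out.append(("\n".join(cur), kind))
    return out


def chunk(text, max_chars=MAX_CHARS):
    """Pack blocks into chunks: a heading or a full budget starts a new chunk.
    Tables are never split, even when they exceed max_chars on their own."""
    chunks, cur = [], ""
    for body, kind in blocks(text):
        if cur and (kind == "heading" or len(cur) + 1 + len(body) > max_chars):
            chunks.append(cur)
            cur = ""
        cur = body if not cur else cur + "\n" + body
    if cur:
        chunks.append(cur)
    return chunks
```

Note how a markdown table travels as one block through `chunk()`, so a retrieval hit always returns the whole table rather than a few stranded rows.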
## Prerequisites
- Python 3.9+
- pip
- ~22MB for model weights (downloaded on first run)
## Key Features
| Feature | Description |
|---|---|
| Chunk-level indexing | Retrieves relevant sections, not whole files |
| Multi-factor ranking | Semantic (50%) + Recency (30%) + Location (20%) |
| Zero external APIs | All embeddings run locally via FastEmbed. First run downloads a ~22MB model. |
| Table-aware chunking | Tables treated as atomic units |
| Privacy-first | Your memory stays on your machine |
## How It Works
Multi-Factor Ranking:
- Semantic (50%): FAISS vector similarity
- Recency (30%): Time-based decay on file modification
- Location (20%): Directory importance (docs > notes > chats)
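The three factors above combine into a single score. This sketch uses the weights listed (0.5/0.3/0.2), but the helper names, the exponential-decay half-life, and the per-directory boost values are assumptions for illustration, not fmem internals:

```python
import time

# Weights from the ranking description above.
WEIGHTS = {"semantic": 0.5, "recency": 0.3, "location": 0.2}
# Directory importance: docs > notes > chats (boost values are assumed).
LOCATION_BOOST = {"docs": 1.0, "notes": 0.7, "chats": 0.4}
HALF_LIFE_DAYS = 30  # assumed decay rate, not fmem's actual value


def recency_score(mtime, now=None):
    """Exponential decay on file modification time:
    1.0 for a file touched now, 0.5 after one half-life."""
    age_days = ((now or time.time()) - mtime) / 86400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)


def rank_score(semantic, mtime, directory, now=None):
    """Weighted blend of vector similarity, recency, and location."""
    return (WEIGHTS["semantic"] * semantic
            + WEIGHTS["recency"] * recency_score(mtime, now)
            + WEIGHTS["location"] * LOCATION_BOOST.get(directory, 0.5))
```

With this blend, a slightly less similar chunk from `docs/` modified yesterday can outrank a marginally more similar chunk from `chats/` modified months ago, which is the behavior the weighting is designed to produce.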
## Performance
- Indexing: ~1.5s per 7KB markdown file
- Search: Sub-100ms for typical queries
- Memory: Model size ~22MB, fits on embedded systems
## Quick Start
A minimal config file lives at `~/.openclaw/memory/fmem.conf`.
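An illustrative config sketch is shown below. Every key name here is an assumption inferred from the features described above (chunk size, model, ranking weights), not fmem's actual schema; check the repo for the real format.

```ini
; Hypothetical fmem.conf -- key names are illustrative, not fmem's real schema
[index]
chunk_size = 800            ; characters per chunk
model = all-minilm:22m      ; local FastEmbed model, ~22MB

[ranking]
semantic_weight = 0.5
recency_weight = 0.3
location_weight = 0.2
```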
## Use Cases
- Personal AI assistants — Remember context across sessions
- Exploratory work — Index daily notes; fmem recalls specific moments, like “that auth flow idea from last Tuesday”
- Knowledge management — Semantic search over notes, decisions, documents
## Technical Details
- Embedding: all-minilm:22m via FastEmbed (local, no API)
- Chunk size: 800 characters, chosen to stay within the embedding model's 512-token context limit
- Index: FAISS for fast similarity search
- License: MIT
## Links
- GitHub: github.com/LuisEduardoAvila/fmem
- Documentation: Architecture · Examples
- Status: v1 Stable (v3.1.0)