# MCP Server Setup
YantrikDB MCP gives any MCP-compatible AI agent persistent cognitive memory across sessions. Install once, add 3 lines of config, and your agent auto-recalls context, auto-remembers decisions, and auto-detects contradictions — no prompting needed.
## Installation

```bash
pip install yantrikdb-mcp
```

## Configuration

Add to your MCP client's configuration:
### Claude Code (`~/.claude/mcp.json`)

```json
{
  "mcpServers": {
    "yantrikdb": {
      "command": "yantrikdb-mcp"
    }
  }
}
```

### Cursor / Windsurf / Copilot / Kilo Code

Same format — add the `yantrikdb` server to your MCP settings. The server communicates over stdio and is compatible with any MCP client.
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `YANTRIKDB_DB_PATH` | `~/.yantrikdb/memory.db` | Database file path |
| `YANTRIKDB_EMBEDDING_MODEL` | `all-MiniLM-L6-v2` | Sentence-transformers model |
| `YANTRIKDB_EMBEDDING_DIM` | `384` | Embedding dimensions |
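Most MCP clients also accept an `env` block per server, so these can be set alongside the command in the JSON config above. If you want to talk to the server from a script instead of an agent, here is a minimal connection sketch using the official `mcp` Python SDK; the database path shown is illustrative:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch yantrikdb-mcp over stdio with a custom database path.
server = StdioServerParameters(
    command="yantrikdb-mcp",
    env={"YANTRIKDB_DB_PATH": "/tmp/scratch-memory.db"},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```

The sketches in the tool sections below assume a `session` opened this way.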
## Available Tools

The MCP server exposes 15 cognitive memory tools. Many use an `action` parameter to group related operations into a single tool.
### Core Memory

| Tool | Actions / Description |
|---|---|
| `remember` | Store a memory with importance, domain, valence, certainty, and source |
| `recall` | Search memories by semantic similarity with filters (domain, source, type). Includes confidence calibration and certainty reasons |
| `forget` | Tombstone a memory permanently |
| `correct` | Fix an incorrect memory (preserves history, transfers relationships) |
| `memory` | `get` — retrieve by ID, `list` — browse with filters, `update_importance` — adjust score, `archive` — cold storage, `hydrate` — restore |
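A sketch of storing and recalling through a `session` opened as in the connection sketch above. The argument names below are assumptions read off the table, not a confirmed schema; check the server's tool listing for the real one:

```python
from mcp import ClientSession

async def demo_core_memory(session: ClientSession) -> None:
    # Store a decision. Field names are illustrative guesses.
    await session.call_tool("remember", arguments={
        "content": "We chose PostgreSQL for the billing service.",
        "importance": 0.8,
        "domain": "billing",
        "source": "user",
    })
    # Recall by semantic similarity, filtered to the same domain.
    result = await session.call_tool("recall", arguments={
        "query": "Which database does billing use?",
        "domain": "billing",
    })
    print(result.content)
```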
### Knowledge Graph

| Tool | Actions / Description |
|---|---|
| `graph` | `relate` — create entity relationships, `edges` — get relationships, `search` — find entities, `profile` — entity intelligence, `depth` — relationship depth score |
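Action-grouped tools like `graph` take the action name alongside its arguments in a single call. A hedged sketch, under the same assumptions as above:

```python
from mcp import ClientSession

async def demo_graph(session: ClientSession) -> None:
    # Link two entities; the relation label is illustrative.
    await session.call_tool("graph", arguments={
        "action": "relate",
        "source": "billing-service",
        "target": "PostgreSQL",
        "relation": "uses",
    })
    # Fetch everything connected to an entity.
    edges = await session.call_tool("graph", arguments={
        "action": "edges",
        "entity": "billing-service",
    })
    print(edges.content)
```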
Cognition & Conflicts
Section titled “Cognition & Conflicts”| Tool | Actions / Description |
|---|---|
think | Run consolidation + conflict detection + pattern mining + substitution scanning + gossip triggers |
conflict | list — detected contradictions, get — details, resolve — keep_a/keep_b/merge/keep_both, dismiss — close without resolving, reclassify — change type and teach substitution categories, scan — force conflict detection |
trigger | pending — undelivered insights, deliver/acknowledge/act/dismiss — lifecycle management, history — past triggers |
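A maintenance pass typically runs `think` and then walks whatever conflicts it surfaced. A sketch with the same assumed argument names:

```python
from mcp import ClientSession

async def demo_conflicts(session: ClientSession) -> None:
    # Consolidate, mine patterns, and scan for contradictions.
    await session.call_tool("think", arguments={})
    # List the contradictions that were detected.
    conflicts = await session.call_tool("conflict", arguments={"action": "list"})
    print(conflicts.content)
    # Resolve one by ID; "keep_b" keeps the second memory.
    await session.call_tool("conflict", arguments={
        "action": "resolve",
        "conflict_id": "c-123",  # hypothetical ID
        "resolution": "keep_b",
    })
```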
### Substitution Categories (V14)

| Tool | Actions / Description |
|---|---|
| `category` | `list` — all categories with member counts, `members` — inspect a category, `learn` — teach new members, `reset` — revert to seed vocabulary |
Categories contain interchangeable terms (PostgreSQL, MySQL, MariaDB → "databases"). When two memories differ only by a substitution, it's flagged as a real conflict instead of redundancy.

Seed categories (8 built-in): `databases`, `cloud_providers`, `programming_languages`, `frameworks`, `roles`, `infrastructure`, `editors_tools`, `llm_providers` (~80 terms total).

Learning loop: seed vocabulary → user corrections via `reclassify` → LLM suggestions via `learn` → categories grow over time.
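Teaching vocabulary is one `category` call per batch of terms. A sketch; the category name comes from the seed list above, while the argument names and the new member are illustrative:

```python
from mcp import ClientSession

async def demo_category_learn(session: ClientSession) -> None:
    # Teach a new interchangeable term to an existing category.
    await session.call_tool("category", arguments={
        "action": "learn",
        "category": "databases",
        "members": ["CockroachDB"],  # hypothetical new member
    })
    # Inspect what the category now contains.
    members = await session.call_tool("category", arguments={
        "action": "members",
        "category": "databases",
    })
    print(members.content)
```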
Sessions & Temporal
Section titled “Sessions & Temporal”| Tool | Actions / Description |
|---|---|
session | start — begin conversation session, end — close with summary, active — current session, history — past sessions, cleanup — abandon stale sessions |
temporal | stale — memories needing verification, upcoming — time-relevant memories |
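Session bookkeeping brackets a conversation. A minimal sketch, same assumptions:

```python
from mcp import ClientSession

async def demo_session(session: ClientSession) -> None:
    # Open a session when the conversation starts...
    await session.call_tool("session", arguments={"action": "start"})
    # ...and close it with a summary when the conversation ends.
    await session.call_tool("session", arguments={
        "action": "end",
        "summary": "Chose PostgreSQL for billing; one memory flagged stale.",
    })
```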
### Procedural Memory

| Tool | Actions / Description |
|---|---|
| `procedure` | `record` — save a strategy/approach, `surface` — retrieve relevant procedures, `reinforce` — update effectiveness, `stats` — effectiveness by domain |
Self-improving memory: tracks what strategies work and adapts over time using EMA-based scoring.
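YantrikDB's exact weighting is internal, but `reinforce` implies an update of this general form: an exponential moving average, where recent outcomes move the score more than old ones. A sketch with an assumed smoothing factor:

```python
def ema_reinforce(effectiveness: float, outcome: float, alpha: float = 0.3) -> float:
    """EMA update for a procedure's effectiveness score.

    outcome: 1.0 if the strategy worked this time, 0.0 if it failed.
    alpha:   smoothing factor (assumed; the real value is internal to YantrikDB).
    """
    return alpha * outcome + (1.0 - alpha) * effectiveness

# Three successes after a rough start pull the score up quickly.
score = 0.2
for outcome in (1.0, 1.0, 1.0):
    score = ema_reinforce(score, outcome)
print(round(score, 3))  # 0.726
```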
Personality & Stats
Section titled “Personality & Stats”| Tool | Actions / Description |
|---|---|
personality | get — current personality traits, derive — recalculate from memories, set — override a trait |
stats | Database statistics: memory counts, entities, edges, conflicts, patterns |
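Reading the derived state follows the same call pattern:

```python
from mcp import ClientSession

async def demo_introspection(session: ClientSession) -> None:
    # Current personality traits, as derived from stored memories.
    traits = await session.call_tool("personality", arguments={"action": "get"})
    # Whole-database counts: memories, entities, edges, conflicts, patterns.
    totals = await session.call_tool("stats", arguments={})
    print(traits.content, totals.content)
```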
## How It Works

The server includes built-in instructions that teach the agent when and how to use memory:
- Auto-recall — at conversation start, the agent searches memory for relevant context
- Auto-remember — decisions, preferences, people, and project context are stored automatically
- Auto-relate — entity relationships are created as they’re discovered
- Consolidation — `think()` merges redundant memories, detects contradictions, mines patterns
- Substitution detection — PostgreSQL vs MySQL flagged as a real conflict, not redundancy
- Correction — when the user corrects a fact, the old memory is tombstoned and a corrected version created
- Feedback learning — reclassifying conflicts teaches the system new vocabulary
## Why Not File-Based Memory?

File-based approaches (`CLAUDE.md`, memory files) load everything into context every conversation. YantrikDB recalls only what's relevant.
| Memories | File-Based | YantrikDB | Savings |
|---|---|---|---|
| 100 | 1,770 tokens | 69 tokens | 96% |
| 500 | 9,807 tokens | 72 tokens | 99.3% |
| 1,000 | 19,988 tokens | 72 tokens | 99.6% |
| 5,000 | 101,739 tokens | 53 tokens | 99.9% |
Selective recall cost is O(1); file-based is O(n). Per the table, file-based loading already costs ~20K tokens at 1,000 memories, and at 5,000 it exceeds 100K tokens, more than three times a 32K context window. YantrikDB stays at ~70 tokens, with precision that improves as you add more memories.
Run the benchmark yourself:

```bash
python benchmarks/bench_token_savings.py
```