MCP Server Setup

YantrikDB MCP gives any MCP-compatible AI agent persistent cognitive memory across sessions. Install once, add 3 lines of config, and your agent auto-recalls context, auto-remembers decisions, and auto-detects contradictions — no prompting needed.

```shell
pip install yantrikdb-mcp
```

Add to your MCP client’s configuration:

```json
{
  "mcpServers": {
    "yantrikdb": {
      "command": "yantrikdb-mcp"
    }
  }
}
```

Other MCP clients use the same format: add the yantrikdb server to your MCP settings. The server communicates over stdio and works with any MCP client.

| Variable | Default | Description |
| --- | --- | --- |
| `YANTRIKDB_DB_PATH` | `~/.yantrikdb/memory.db` | Database file path |
| `YANTRIKDB_EMBEDDING_MODEL` | `all-MiniLM-L6-v2` | Sentence-transformers model |
| `YANTRIKDB_EMBEDDING_DIM` | `384` | Embedding dimensions |
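
For example, these variables can be set in the `env` block of the same MCP configuration entry (a sketch — the values shown are just the defaults from the table above):

```json
{
  "mcpServers": {
    "yantrikdb": {
      "command": "yantrikdb-mcp",
      "env": {
        "YANTRIKDB_DB_PATH": "~/.yantrikdb/memory.db",
        "YANTRIKDB_EMBEDDING_MODEL": "all-MiniLM-L6-v2",
        "YANTRIKDB_EMBEDDING_DIM": "384"
      }
    }
  }
}
```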

The MCP server exposes 15 cognitive memory tools. Many use an action parameter to group related operations into a single tool.

| Tool | Actions / Description |
| --- | --- |
| `remember` | Store a memory with importance, domain, valence, certainty, and source |
| `recall` | Search memories by semantic similarity with filters (domain, source, type). Includes confidence calibration and certainty reasons |
| `forget` | Tombstone a memory permanently |
| `correct` | Fix an incorrect memory (preserves history, transfers relationships) |
| `memory` | `get` — retrieve by ID, `list` — browse with filters, `update_importance` — adjust score, `archive` — cold storage, `hydrate` — restore |
| `graph` | `relate` — create entity relationships, `edges` — get relationships, `search` — find entities, `profile` — entity intelligence, `depth` — relationship depth score |
| `think` | Run consolidation + conflict detection + pattern mining + substitution scanning + gossip triggers |
| `conflict` | `list` — detected contradictions, `get` — details, `resolve` — keep_a/keep_b/merge/keep_both, `dismiss` — close without resolving, `reclassify` — change type and teach substitution categories, `scan` — force conflict detection |
| `trigger` | `pending` — undelivered insights, `deliver`/`acknowledge`/`act`/`dismiss` — lifecycle management, `history` — past triggers |
| `category` | `list` — all categories with member counts, `members` — inspect a category, `learn` — teach new members, `reset` — revert to seed vocabulary |
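
The action-parameter grouping can be illustrated with a minimal dispatcher sketch — the names and return shapes here are illustrative, not YantrikDB's internals:

```python
# Minimal sketch of the action-parameter pattern: one MCP tool exposes
# several related operations, selected by an "action" argument.
# All names and return values are illustrative, not YantrikDB internals.

def memory_tool(action: str, **kwargs):
    actions = {
        "get": lambda memory_id: {"op": "get", "id": memory_id},
        "list": lambda **filters: {"op": "list", "filters": filters},
        "update_importance": lambda memory_id, importance: {
            "op": "update_importance", "id": memory_id,
            "importance": importance},
        "archive": lambda memory_id: {"op": "archive", "id": memory_id},
        "hydrate": lambda memory_id: {"op": "hydrate", "id": memory_id},
    }
    if action not in actions:
        raise ValueError(f"unknown action: {action}")
    return actions[action](**kwargs)

print(memory_tool("update_importance", memory_id="m1", importance=0.9))
```

One tool with an `action` parameter keeps the tool list short for the agent while still exposing each operation with its own argument set.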

Categories contain interchangeable terms (PostgreSQL, MySQL, MariaDB → “databases”). When two memories differ only by a substitution, it’s flagged as a real conflict instead of redundancy.

Seed categories (8 built-in): databases, cloud_providers, programming_languages, frameworks, roles, infrastructure, editors_tools, llm_providers (~80 terms total).

Learning loop: seed → user corrections via reclassify → LLM suggestions via learn → categories grow over time.
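
The substitution check can be sketched as follows — a toy version of the logic, assuming a flat term-to-category lookup, not YantrikDB's actual implementation:

```python
# Sketch of substitution detection: two memories that differ in exactly
# one slot, where both differing terms belong to the same category, are
# flagged as a real conflict rather than redundancy.
# The category map here is a tiny illustrative subset of the seed vocabulary.

CATEGORIES = {
    "postgresql": "databases", "mysql": "databases", "mariadb": "databases",
    "aws": "cloud_providers", "gcp": "cloud_providers",
}

def is_substitution_conflict(a: str, b: str) -> bool:
    wa, wb = a.lower().split(), b.lower().split()
    if len(wa) != len(wb):
        return False
    diffs = [(x, y) for x, y in zip(wa, wb) if x != y]
    # exactly one differing slot, and both terms share a known category
    return (len(diffs) == 1
            and CATEGORIES.get(diffs[0][0]) is not None
            and CATEGORIES.get(diffs[0][0]) == CATEGORIES.get(diffs[0][1]))

print(is_substitution_conflict("we use postgresql in prod",
                               "we use mysql in prod"))   # True
```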

| Tool | Actions / Description |
| --- | --- |
| `session` | `start` — begin conversation session, `end` — close with summary, `active` — current session, `history` — past sessions, `cleanup` — abandon stale sessions |
| `temporal` | `stale` — memories needing verification, `upcoming` — time-relevant memories |
| `procedure` | `record` — save a strategy/approach, `surface` — retrieve relevant procedures, `reinforce` — update effectiveness, `stats` — effectiveness by domain |

Self-improving memory: tracks what strategies work and adapts over time using EMA-based scoring.
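
EMA-based scoring can be sketched like this — the smoothing factor `alpha` and the 0–1 outcome scale are assumptions, not YantrikDB's actual parameters:

```python
# Sketch of EMA (exponential moving average) effectiveness scoring:
# each new outcome is blended into the running score, so recent
# results count more than old ones. alpha is an assumed value.

def reinforce(score: float, outcome: float, alpha: float = 0.3) -> float:
    """Blend the latest outcome (0.0 = failed, 1.0 = worked) into the
    running effectiveness score."""
    return alpha * outcome + (1 - alpha) * score

score = 0.5                         # neutral starting effectiveness
for outcome in (1.0, 1.0, 0.0):     # worked, worked, failed
    score = reinforce(score, outcome)
print(score)
```

Because the average is exponential rather than cumulative, a strategy that stops working sees its score decay quickly instead of coasting on old successes.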

| Tool | Actions / Description |
| --- | --- |
| `personality` | `get` — current personality traits, `derive` — recalculate from memories, `set` — override a trait |
| `stats` | Database statistics: memory counts, entities, edges, conflicts, patterns |

The server includes built-in instructions that teach the agent when and how to use memory:

  1. Auto-recall — at conversation start, the agent searches memory for relevant context
  2. Auto-remember — decisions, preferences, people, and project context are stored automatically
  3. Auto-relate — entity relationships are created as they’re discovered
  4. Consolidation — `think()` merges redundant memories, detects contradictions, mines patterns
  5. Substitution detection — PostgreSQL vs MySQL flagged as a real conflict, not redundancy
  6. Correction — when the user corrects a fact, the old memory is tombstoned and a corrected version created
  7. Feedback learning — reclassifying conflicts teaches the system new vocabulary
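
The correction flow in step 6 can be sketched as follows — the data model is illustrative, not YantrikDB's actual schema:

```python
# Sketch of the correction flow (step 6): the old memory is tombstoned
# rather than deleted, and the corrected copy links back to it so
# history is preserved. Illustrative data model only.

import uuid

memories: dict[str, dict] = {}

def remember(text: str) -> str:
    mid = str(uuid.uuid4())
    memories[mid] = {"text": text, "tombstoned": False, "corrects": None}
    return mid

def correct(old_id: str, new_text: str) -> str:
    memories[old_id]["tombstoned"] = True    # preserve, never delete
    new_id = remember(new_text)
    memories[new_id]["corrects"] = old_id    # keep the history link
    return new_id

old = remember("deploys happen on Fridays")
new = correct(old, "deploys happen on Tuesdays")
print(memories[old]["tombstoned"], memories[new]["corrects"] == old)
```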

File-based approaches (CLAUDE.md, memory files) load everything into context every conversation. YantrikDB recalls only what’s relevant.

| Memories | File-Based | YantrikDB | Savings |
| --- | --- | --- | --- |
| 100 | 1,770 tokens | 69 tokens | 96% |
| 500 | 9,807 tokens | 72 tokens | 99.3% |
| 1,000 | 19,988 tokens | 72 tokens | 99.6% |
| 5,000 | 101,739 tokens | 53 tokens | 99.9% |

Selective recall cost is O(1). File-based is O(n). At 500 memories, file-based exceeds 32K context windows. At 5,000, it doesn’t fit anywhere. YantrikDB stays at ~70 tokens with precision that improves as you add more memories.
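
The scaling difference can be sanity-checked with a toy model — the tokens-per-memory and top-k values below are illustrative assumptions, not the benchmark's numbers:

```python
# Toy model of context cost: file-based memory loads every entry into
# context (O(n)); selective recall returns a fixed top-k (O(1)).
# TOKENS_PER_MEMORY and TOP_K are assumptions for illustration.

TOKENS_PER_MEMORY = 20
TOP_K = 3

def file_based_cost(n_memories: int) -> int:
    return n_memories * TOKENS_PER_MEMORY              # grows linearly

def selective_recall_cost(n_memories: int) -> int:
    return min(n_memories, TOP_K) * TOKENS_PER_MEMORY  # flat after top-k

for n in (100, 500, 5000):
    print(n, file_based_cost(n), selective_recall_cost(n))
```

However the constants are chosen, the file-based curve is linear in the number of memories while selective recall plateaus once the store is larger than the top-k window.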

Run the benchmark: `python benchmarks/bench_token_savings.py`