# Python API

## YantrikDB

The main database class.

### Constructor

```python
db = YantrikDB(
    db_path="memory.db",
    embedding_dim=384,
    embedder=None,  # optional: a sentence_transformers model
)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| `db_path` | `str` | `"memory.db"` | Path to the database file (`:memory:` for in-memory) |
| `embedding_dim` | `int` | `384` | Embedding vector dimensions |
| `embedder` | `object` | `None` | A SentenceTransformer model instance for auto-embedding |
## Core Memory

### `record(text, **kwargs) → str`

Store a memory. Returns the record ID.

```python
rid = db.record(
    "User likes Python",
    importance=0.7,          # 0.0-1.0
    valence=0.3,             # -1.0 to 1.0 (emotional)
    memory_type="semantic",  # episodic, semantic, procedural
    domain="work",
    source="user",
    certainty=0.8,           # 0.0-1.0
    namespace="default",
)
```

### `record_batch(inputs) → list[str]`
Store multiple memories at once. Each input is a dict with the same fields as `record()`.

```python
rids = db.record_batch([
    {"text": "User likes Python", "importance": 0.7},
    {"text": "User works at Acme Corp", "importance": 0.9},
])
```

### `recall(query, top_k=10, **kwargs) → list[dict]`
Retrieve memories using relevance-conditioned scoring.

```python
results = db.recall(
    "What programming languages?",
    top_k=5,
    memory_type="semantic",  # optional filter
    namespace="default",     # optional filter
)
for r in results:
    print(f"[{r['score']:.3f}] {r['text']}")
```

Returns a list of dicts with `text`, `score`, `rid`, `importance`, `valence`, `certainty`, etc.
### `recall_with_response(query, top_k=10) → dict`

Like `recall()`, but includes confidence calibration with `certainty_reasons` explaining why confidence is high or low.

```python
resp = db.recall_with_response("What is the user's timezone?", top_k=3)
# resp['results'], resp['certainty'], resp['certainty_reasons']
```

### `recall_refine(original_query, refinement, top_k=5) → list[dict]`

Refine a low-confidence recall with a follow-up query.
### `get(rid) → dict | None`

Retrieve a specific memory by record ID.

### `forget(rid) → bool`

Tombstone a memory by record ID.
### `correct(rid, new_text, **kwargs) → str`

Fix an incorrect memory. Preserves history and transfers relationships to the corrected version.

```python
new_rid = db.correct(old_rid, "User works at Google, not Meta")
```

### `list_memories(namespace=None, limit=100, offset=0) → list[dict]`

List all active memories with pagination.
### `decay(threshold=0.01) → list[dict]`

Run a decay pass. Returns the memories that fell below the threshold and were archived.

### `stats(namespace=None) → dict`

Get database statistics: `active_memories`, `edges`, `entities`, `open_conflicts`, etc.
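A periodic maintenance pass pairs the two calls above. A hedged sketch (the `maintenance_report` helper is illustrative, not part of the API; it assumes only the `decay()` and `stats()` signatures documented above):

```python
def maintenance_report(db, decay_threshold=0.01):
    """Run a decay pass, then return what was archived alongside fresh stats."""
    archived = db.decay(threshold=decay_threshold)
    return {"archived_count": len(archived), "stats": db.stats()}
```
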
## Knowledge Graph

### `relate(src, dst, rel_type, weight=1.0)`

Create a relationship in the cognitive graph.

```python
db.relate("Alice", "Acme Corp", "works_at", weight=0.9)
db.relate("Alice", "Rust", "prefers", weight=0.8)
```

### `get_edges(entity) → list[dict]`

Get all relationships for an entity.
### `search_entities(query, limit=20) → list[dict]`

Find entities by name pattern.

### `entity_profile(entity, days=90.0, namespace=None) → dict`

Get a comprehensive profile for an entity: relationship count, memory mentions, domain spread, and activity timeline.

### `relationship_depth(entity, namespace=None) → dict`

Composite depth score (0.0-1.0) combining sessions together, memories mentioning the entity, domains spanned, connection count, and relationship-type diversity.

```python
depth = db.relationship_depth("Alice")
# depth['depth_score'], depth['sessions_together'], depth['domains_spanning'], ...
```

## Cognition
### `think(**kwargs) → dict`

Run autonomous cognition: consolidation, conflict detection, pattern mining, substitution-category scanning, and gossip triggers.

```python
result = db.think(
    importance_threshold=0.5,
    run_consolidation=True,
    run_conflict_scan=True,
    run_pattern_mining=True,
)
# result['triggers'], result['consolidation_count'],
# result['conflicts_found'], result['patterns_new']
```

### `get_conflicts(status=None, conflict_type=None, entity=None, priority=None, limit=50) → list[dict]`
List detected contradictions with optional filters.

```python
conflicts = db.get_conflicts(status="open")
for c in conflicts:
    print(f"{c['conflict_type']}: {c['detection_reason']}")
```

### `resolve_conflict(conflict_id, resolution, note=None)`

Resolve a contradiction. Valid resolutions: `keep_a`, `keep_b`, `merge`, `keep_both`.

### `dismiss_conflict(conflict_id, note=None)`

Dismiss a conflict without resolving it.

### `scan_conflicts() → list[dict]`

Manually trigger conflict scanning (this also runs inside `think()`).
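The conflict calls above combine into a simple triage loop. A hedged sketch: `triage_conflicts` is illustrative, the `"stylistic"` conflict type is hypothetical, and it assumes each conflict dict carries a `conflict_id` field (not shown in the docs above).

```python
def triage_conflicts(db, auto_dismiss_types=("stylistic",)):
    """Walk open conflicts: dismiss trivially ignorable types, surface the rest."""
    needs_review = []
    for c in db.get_conflicts(status="open"):
        if c["conflict_type"] in auto_dismiss_types:
            db.dismiss_conflict(c["conflict_id"], note="auto-dismissed")
        else:
            needs_review.append(c)
    return needs_review
```
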
### `get_patterns(limit=20) → list[dict]`

List all discovered behavioral patterns.

### `get_personality() → dict`

Get the AI’s derived personality traits based on memory patterns.

### `derive_personality() → dict`

Recalculate personality traits from the current memory state.
## Substitution Categories (V14)

YantrikDB maintains vocabularies of interchangeable terms (e.g., PostgreSQL and MySQL are both “databases”). When two memories differ only by a substitution, the pair is flagged as a conflict rather than as redundancy.

### `substitution_categories() → list[dict]`

List all substitution categories with member counts.

```python
cats = db.substitution_categories()
# [{"name": "databases", "conflict_mode": "exclusive", "member_count": 15}, ...]
```

### `substitution_members(category_name) → list[dict]`

List members of a specific category.

```python
members = db.substitution_members("databases")
# [{"token_normalized": "postgresql", "confidence": 0.95, "source": "seed"}, ...]
```

### `learn_category_members(category_name, members, source="llm_suggested") → int`

Add new members to a category. Returns the count of new members added.

```python
count = db.learn_category_members(
    "databases",
    [("tidb", 0.35), ("surrealdb", 0.35)],
    "llm_suggested",
)
```

| Source | Confidence | Drives conflicts? |
|---|---|---|
| `seed` | 0.95 | Yes (≥ 0.6 threshold) |
| `user_confirmed` | 1.0 | Yes |
| `llm_suggested` | 0.35 | No (below threshold) |
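The 0.6 threshold in the table can be expressed directly: a member drives conflict detection only when its confidence clears it. A one-line check (the helper name is illustrative):

```python
CONFLICT_CONFIDENCE_THRESHOLD = 0.6  # from the table above

def drives_conflicts(confidence):
    """True when a category member's confidence is high enough to flag conflicts."""
    return confidence >= CONFLICT_CONFIDENCE_THRESHOLD

# The three source tiers from the table:
assert drives_conflicts(0.95)      # seed
assert drives_conflicts(1.0)       # user_confirmed
assert not drives_conflicts(0.35)  # llm_suggested
```

This is why `llm_suggested` members are recorded but inert until a user confirmation (or `reclassify_conflict` feedback) raises their confidence.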
### `reclassify_conflict(conflict_id, new_type) → dict`

Reclassify a conflict and teach the system: the differing tokens are extracted and learned as new category members from the correction.

```python
result = db.reclassify_conflict(conflict_id, "preference")
# result['learned_members'] — tokens added to categories from this feedback
```

### `reset_category_to_seed(category_name) → int`

Remove all learned members from a category, keeping only the seed vocabulary. Returns the count of members removed.

```python
removed = db.reset_category_to_seed("editors_tools")
```

## Sessions
### `session_start(namespace="default", client_id="default", metadata=None) → str`

Start a new conversation session. Returns the session ID.

### `session_end(session_id, summary=None) → dict`

End a session with an optional summary.

### `active_session(namespace="default", client_id="default") → dict | None`

Get the currently active session.

### `session_history(namespace="default", client_id="default", limit=10) → list[dict]`

Get recent session history.

### `session_abandon_stale(max_age_hours=24.0) → int`

Clean up sessions that were never properly ended.
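Because forgotten `session_end()` calls are what `session_abandon_stale()` exists to mop up, pairing start and end in a context manager avoids the problem. A sketch, not part of the API, assuming only the `session_start`/`session_end` signatures above:

```python
from contextlib import contextmanager

@contextmanager
def session(db, namespace="default", client_id="default", summary=None):
    """Run a block of work inside a session, ending it even if the block raises."""
    sid = db.session_start(namespace=namespace, client_id=client_id)
    try:
        yield sid
    finally:
        db.session_end(sid, summary=summary)
```

Usage: `with session(db, summary="daily check-in") as sid: ...`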
## Temporal

### `stale(days=30.0, limit=50, namespace=None) → list[dict]`

Find memories that haven’t been accessed in a while and may need verification.

### `upcoming(days=7.0, limit=50, namespace=None) → list[dict]`

Find memories with upcoming temporal relevance (deadlines, events).
## Procedural Memory

Self-improving memory for strategies and behaviors. Tracks what works and adapts over time.

### `record_procedural(text, domain="general", task_context="", effectiveness=0.5, namespace="default") → str`

Record a procedural memory (a strategy or approach that worked or didn’t).

```python
rid = db.record_procedural(
    "When user asks about code, show examples before explanation",
    domain="communication",
    effectiveness=0.8,
)
```

### `surface_procedural(query_embedding, query_text=None, domain=None, top_k=5, namespace=None) → list[dict]`

Retrieve relevant procedural memories for a task context.

### `reinforce_procedural(rid, outcome) → bool`

Update a procedural memory’s effectiveness score based on the outcome (EMA-based adaptation).

```python
db.reinforce_procedural(rid, outcome=0.9)  # worked well
db.reinforce_procedural(rid, outcome=0.2)  # didn't work this time
```

### `procedural_stats(namespace=None) → list[dict]`

Get procedural memory statistics by domain: count and average effectiveness.
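Reinforcement is described above as an exponential moving average. One plausible form of that update, as arithmetic only: the smoothing factor `alpha` is an assumption for illustration, not YantrikDB’s actual constant.

```python
def ema_update(current, outcome, alpha=0.3):
    """Blend a new outcome into the running effectiveness score."""
    return alpha * outcome + (1 - alpha) * current

score = 0.5  # the default starting effectiveness
for outcome in (0.9, 0.9, 0.2):
    score = ema_update(score, outcome)
# Recent outcomes dominate, but history is never discarded outright.
```

The EMA shape explains the behavior you see in practice: a single bad outcome after several good ones nudges the score down rather than resetting it.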
## Triggers & Lifecycle

### `get_pending_triggers(limit=10) → list[dict]`

Get undelivered proactive triggers.

### `deliver_trigger(trigger_id) → bool`

Mark a trigger as delivered to the agent.

### `acknowledge_trigger(trigger_id) → bool`

Mark a trigger as acknowledged by the agent.

### `act_on_trigger(trigger_id) → bool`

Mark a trigger as acted upon.

### `dismiss_trigger(trigger_id) → bool`

Dismiss a trigger without acting on it.

### `get_trigger_history(limit=20, trigger_type=None) → list[dict]`

Get trigger history with an optional type filter.
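The lifecycle methods above chain into a delivery loop. A hedged sketch: `process_triggers` is illustrative, and it assumes each trigger dict carries a `trigger_id` field (not spelled out in the docs above).

```python
def process_triggers(db, handle):
    """Deliver each pending trigger, then mark it acted-on or dismissed
    depending on what the handler callback decides."""
    acted = 0
    for t in db.get_pending_triggers():
        db.deliver_trigger(t["trigger_id"])
        if handle(t):
            db.act_on_trigger(t["trigger_id"])
            acted += 1
        else:
            db.dismiss_trigger(t["trigger_id"])
    return acted
```
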
## Recall Feedback

### `recall_feedback(query, chosen_rid, rejected_rids=None) → bool`

Teach the scoring engine which results were useful. Improves retrieval quality over time.

```python
db.recall_feedback(
    query="What's the user's timezone?",
    chosen_rid="019d...",
    rejected_rids=["019e...", "019f..."],
)
```

### `learned_weights() → dict`

Inspect the current learned scoring weights.
## Embedding

### `set_embedder(fn_or_model)`

Set the embedding function or model.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Pass the model directly (recommended)
db = YantrikDB("memory.db", embedding_dim=384, embedder=model)

# Or set it after construction
db.set_embedder(lambda text: model.encode(text).tolist())
```

### `embed(text) → list[float]`

Generate an embedding for text using the configured embedder.
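Any callable mapping text to a list of floats of length `embedding_dim` works as an embedder, which makes tests easy to run without downloading a model. A dependency-free stand-in (deterministic but semantically meaningless; hypothetical, for wiring only):

```python
import hashlib
import struct

def toy_embedder(text, dim=384):
    """Deterministic pseudo-embedding: hash the text into `dim` floats in [0, 1)."""
    vec = []
    counter = 0
    while len(vec) < dim:
        digest = hashlib.sha256(f"{counter}:{text}".encode()).digest()
        for i in range(0, len(digest) - 3, 4):
            (n,) = struct.unpack_from(">I", digest, i)
            vec.append(n / 2**32)
        counter += 1
    return vec[:dim]
```

Wire it with `db.set_embedder(toy_embedder)` in tests, and swap in a real SentenceTransformer in production; recall quality with the stand-in is random, since nearby texts hash to unrelated vectors.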
## Sync & Replication

### `extract_ops_since(hlc=None, limit=1000) → list[dict]`

Extract oplog entries since a given HLC (hybrid logical clock) timestamp, for CRDT sync.

### `apply_ops(ops) → dict`

Apply remote operations (merge with conflict resolution).
### `archive(rid) → bool`

Move a memory to cold storage.

### `hydrate(rid) → bool`

Restore a memory from cold storage.