# Wire Protocol
YantrikDB Server speaks a custom binary wire protocol on port 7437 alongside the JSON HTTP gateway on port 7438. The wire protocol is faster (multiplexed, binary-encoded), supports streaming responses, and is the same protocol nodes use to talk to each other.
The codec is published as the yantrikdb-protocol crate.
## Frame format

Every message is a length-prefixed frame:

```
┌──────────┬──────────┬──────────┬──────────┬────────────┐
│  Length  │ Version  │  OpCode  │ StreamID │  Payload   │
│ (4 bytes)│ (1 byte) │ (1 byte) │ (4 bytes)│ (variable) │
└──────────┴──────────┴──────────┴──────────┴────────────┘
```

- Length: big-endian u32, total bytes after this field
- Version: protocol version plus flag bits:
  - Bit 7 (0x80): JSON payload mode (debug)
  - Bit 6 (0x40): zstd-compressed payload
- OpCode: u8, identifies the message type (see table below)
- StreamID: big-endian u32, allows multiplexing concurrent requests on one connection
- Payload: MessagePack-serialized message struct, optionally zstd-compressed
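Under these rules, a frame can be packed and unpacked in a few lines. A minimal Python sketch, assuming the AUTH opcode (0x01) from the table below, using the JSON debug mode (bit 7) so no MessagePack library is needed, and a placeholder `PROTOCOL_VERSION` of 1 (check the `yantrikdb-protocol` crate for the real value):

```python
import json
import struct

FLAG_JSON = 0x80  # bit 7: JSON payload mode (debug)
FLAG_ZSTD = 0x40  # bit 6: zstd-compressed payload
PROTOCOL_VERSION = 1  # assumed placeholder value

def encode_frame(opcode: int, stream_id: int, payload: bytes, flags: int = 0) -> bytes:
    # Length counts every byte after the 4-byte length field itself:
    # 1 (version) + 1 (opcode) + 4 (stream id) + len(payload).
    body = struct.pack(">BBI", PROTOCOL_VERSION | flags, opcode, stream_id) + payload
    return struct.pack(">I", len(body)) + body

def decode_frame(buf: bytes):
    (length,) = struct.unpack_from(">I", buf, 0)
    version, opcode, stream_id = struct.unpack_from(">BBI", buf, 4)
    payload = buf[10 : 4 + length]
    return version, opcode, stream_id, payload

# Build and decode an AUTH frame in JSON debug mode (bit 7 set).
payload = json.dumps({"token": "ydb_abc123"}).encode()
frame = encode_frame(0x01, stream_id=1, payload=payload, flags=FLAG_JSON)
version, opcode, stream_id, body = decode_frame(frame)
```

A production client would use MessagePack with bit 7 clear; the framing logic is identical.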
## Connection lifecycle

```
Client                                      Server
  │                                           │
  ├── AUTH(token="ydb_abc123") ──────────►    │
  │                                           ├── validate token
  │ ◄──────────────── AUTH_OK(db="default")   │
  │                                           │
  ├── REMEMBER(text="...") ──────────────►    │
  │ ◄──────── REMEMBER_OK(rid="mem_01")       │
  │                                           │
  ├── RECALL(query="...") ───────────────►    │
  │ ◄──────── RECALL_RESULT(mem_01, 0.93)     │
  │ ◄──────── RECALL_RESULT(mem_07, 0.71)     │
  │ ◄──────── RECALL_END(total=2)             │
  │                                           │
```

The recall response is streamed: each result arrives as its own frame, terminated by a RECALL_END frame. This lets clients start processing results before the server has finished computing them all.
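The streamed recall response is straightforward to consume incrementally. A Python sketch, assuming JSON debug-mode frames and the RECALL_RESULT/RECALL_END opcodes (0x31/0x32) from the table below; the `frame` helper and the sample stream are hypothetical stand-ins for real server output:

```python
import json
import struct

RECALL_RESULT, RECALL_END = 0x31, 0x32  # opcodes from the table below

def frames(buf: bytes):
    """Split a byte stream into (opcode, message) pairs (JSON debug mode assumed)."""
    off = 0
    while off < len(buf):
        (length,) = struct.unpack_from(">I", buf, off)
        _version, opcode, _stream = struct.unpack_from(">BBI", buf, off + 4)
        payload = buf[off + 10 : off + 4 + length]
        yield opcode, json.loads(payload)
        off += 4 + length

def recall_results(buf: bytes):
    """Yield each result as its frame arrives; stop at RECALL_END."""
    for opcode, msg in frames(buf):
        if opcode == RECALL_END:
            return
        if opcode == RECALL_RESULT:
            yield msg

def frame(opcode, obj):
    # Hypothetical helper simulating a server-sent frame (version 1 | JSON flag, stream 7).
    payload = json.dumps(obj).encode()
    body = struct.pack(">BBI", 0x81, opcode, 7) + payload
    return struct.pack(">I", len(body)) + body

stream = (frame(RECALL_RESULT, {"rid": "mem_01", "score": 0.93})
          + frame(RECALL_RESULT, {"rid": "mem_07", "score": 0.71})
          + frame(RECALL_END, {"total": 2}))
results = list(recall_results(stream))
```

Because `recall_results` is a generator, a real client can process each result while later frames are still in flight.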
## OpCode table

### Auth (0x01–0x03)

| Code | Name | Direction |
|---|---|---|
| 0x01 | AUTH | C→S |
| 0x02 | AUTH_OK | S→C |
| 0x03 | AUTH_FAIL | S→C |
### Memory (0x20–0x32)

| Code | Name | Direction |
|---|---|---|
| 0x20 | REMEMBER | C→S |
| 0x21 | REMEMBER_OK | S→C |
| 0x22 | REMEMBER_BATCH | C→S |
| 0x30 | RECALL | C→S |
| 0x31 | RECALL_RESULT | S→C (streamed) |
| 0x32 | RECALL_END | S→C |
### Graph (0x40–0x43)

| Code | Name |
|---|---|
| 0x40 | RELATE |
| 0x41 | RELATE_OK |
| 0x42 | EDGES |
| 0x43 | EDGES_RESULT |
### Lifecycle (0x50–0x71)

| Code | Name |
|---|---|
| 0x50 | FORGET |
| 0x51 | FORGET_OK |
| 0x60 | SESSION_START |
| 0x61 | SESSION_END |
| 0x62 | SESSION_OK |
| 0x70 | THINK |
| 0x71 | THINK_RESULT |
### Cluster / Replication (0xC0–0xCF)

| Code | Name | Direction |
|---|---|---|
| 0xC0 | CLUSTER_HELLO | peer→peer |
| 0xC1 | CLUSTER_HELLO_OK | peer→peer |
| 0xC2 | OPLOG_PULL | peer→peer |
| 0xC3 | OPLOG_PULL_RESULT | peer→peer |
| 0xC4 | OPLOG_PUSH | peer→peer |
| 0xC5 | OPLOG_PUSH_OK | peer→peer |
| 0xC6 | HEARTBEAT | leader→follower |
| 0xC7 | HEARTBEAT_ACK | follower→leader |
| 0xC8 | REQUEST_VOTE | candidate→voter |
| 0xC9 | VOTE_GRANTED | voter→candidate |
| 0xCA | VOTE_DENIED | voter→candidate |
| 0xCB | CLUSTER_STATUS | C→S |
| 0xCC | CLUSTER_STATUS_RESULT | S→C |
| 0xCD | READONLY_ERROR | S→C |
| 0xCE | CLUSTER_DATABASE_LIST | peer→peer |
| 0xCF | CLUSTER_DATABASE_LIST_RESULT | peer→peer |
### Control (0xF0–0xF2)

| Code | Name |
|---|---|
| 0xF0 | ERROR |
| 0xF1 | PING |
| 0xF2 | PONG |
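For dispatch in a hand-rolled client, the tables above can be transcribed into an enum. A partial Python sketch (only some opcodes shown; values taken from the tables above):

```python
from enum import IntEnum

class OpCode(IntEnum):
    # Auth
    AUTH = 0x01
    AUTH_OK = 0x02
    AUTH_FAIL = 0x03
    # Memory
    REMEMBER = 0x20
    REMEMBER_OK = 0x21
    RECALL = 0x30
    RECALL_RESULT = 0x31
    RECALL_END = 0x32
    # Control
    ERROR = 0xF0
    PING = 0xF1
    PONG = 0xF2

def is_cluster(op: int) -> bool:
    # Cluster/replication opcodes occupy the 0xC0-0xCF block,
    # so peer traffic can be filtered with a range check.
    return 0xC0 <= op <= 0xCF
```

Grouping opcodes into ranges (auth, memory, graph, cluster, control) makes this kind of range-based routing cheap.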
## Compression

Large payloads (oplog batches, recall results) are automatically compressed with zstd when they exceed 4 KB. The compression flag (bit 6 of the version byte) tells the receiver to decompress before unpacking the MessagePack body.
Compressed payloads typically achieve a 3–5× compression ratio on natural-language memory text.
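The sender's decision reduces to a threshold check plus a flag bit. A Python sketch of that logic; note that `zlib` is used here only as a stdlib stand-in for zstd, and the threshold constant mirrors the 4 KB rule above:

```python
import zlib  # stand-in for zstd; the real protocol uses zstd

COMPRESSION_THRESHOLD = 4096  # 4 KB, per the rule above
FLAG_COMPRESSED = 0x40        # bit 6 of the version byte

def maybe_compress(payload: bytes, version: int):
    """Return (version_byte, payload), setting bit 6 when compression kicks in."""
    if len(payload) > COMPRESSION_THRESHOLD:
        return version | FLAG_COMPRESSED, zlib.compress(payload)
    return version, payload

small_v, small_p = maybe_compress(b"x" * 100, 1)            # under threshold: untouched
big_v, big_p = maybe_compress(b"memory text " * 1000, 1)    # over threshold: flag set
```

The receiver mirrors this: check bit 6, decompress if set, then decode the MessagePack body.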
## Implementing a client

You have three options:

- Use the Rust crate: `cargo add yantrikdb-protocol` gives you the full Frame/codec/message types.
- Use the HTTP gateway: simpler, just JSON over HTTPS; most clients should do this.
- Implement from scratch: follow the frame format above. The MessagePack message structs are documented in the protocol crate's source code.
## Why a custom protocol?

YantrikDB workloads are chatty (5–20 operations per agent turn), session-aware, and benefit from streaming. HTTP works fine for occasional clients but adds per-request overhead. The wire protocol gives you:
- Lower latency: single connection, no per-request HTTP parsing
- Multiplexing: multiple streams on one connection
- Streaming recall: process results as they arrive
- Server push: events (conflicts, decay, triggers) flow without polling
- Native binary: no JSON parsing on the hot path
The HTTP gateway is the universal interface; the wire protocol is the optimized one.