
Engram Documentation

Engram is a decentralized vector database on Bittensor — permanent, content-addressed semantic memory for AI applications. No central server, no single point of failure.

What is Engram?

Engram applies the IPFS insight to AI memory: every piece of knowledge gets a CID derived deterministically from its embedding. The same text always maps to the same CID — regardless of which miner stores it.
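Concretely, the deterministic embedding-to-CID mapping can be sketched as follows. The exact serialization Engram uses is not documented here; this hypothetical `derive_cid` hashes the embedding's raw float bytes with SHA-256, then mirrors the `v1::` prefix and 32-hex-character length of the CID shown in the SDK example below:

```python
import hashlib
import struct

def derive_cid(embedding: list[float]) -> str:
    """Hypothetical CID derivation: SHA-256 over the embedding bytes.

    The "v1::" prefix and 128-bit truncation are inferred from the CID
    format in the SDK example; the real serialization is an assumption.
    """
    raw = struct.pack(f"{len(embedding)}f", *embedding)  # fixed byte layout
    digest = hashlib.sha256(raw).hexdigest()
    return "v1::" + digest[:32]

# Same embedding in, same CID out — no matter which miner computes it.
vec = [0.1, -0.2, 0.3]
assert derive_cid(vec) == derive_cid(vec)
```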

```python
from engram.sdk import EngramClient

client = EngramClient("http://127.0.0.1:8091")

# Store text — returns a permanent CID
cid = client.ingest("The transformer architecture changed everything.")
print(cid)  # v1::a3f2b1c4d5e6f7a8b9c0d1e2f3a4b5c6

# Semantic search — no exact match needed
results = client.query("how does attention work?", top_k=5)
for r in results:
    print(f"{r['score']:.4f}  {r['cid']}")
```
> Tip: The CLI defaults to a local FAISS store and sentence-transformers embedder — no OpenAI key or running node is needed to get started.

How it works

1. Ingest: text is embedded (OpenAI or local), assigned a SHA-256 CID, and stored in the miner's FAISS index.
2. Store: miners compete to store vectors and earn TAO. Replication across multiple miners ensures durability.
3. Prove: validators issue HMAC challenge-response proofs to verify that miners actually hold the data.
4. Query: semantic search runs ANN over the FAISS index, returning the top-K most similar embeddings by cosine similarity.
5. Score: validators score miners on recall accuracy, latency, and proof success rate, then set on-chain weights.
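The query step above is, at its core, a similarity ranking; FAISS IVF-flat approximates it with an inverted-file index. A brute-force sketch of the same cosine-similarity top-K scoring (hypothetical `top_k` helper, toy vectors and CIDs) looks like this:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query: list[float], index: dict, k: int = 5) -> list:
    """Rank stored (cid, vector) pairs by similarity to the query."""
    scored = [(cosine(query, vec), cid) for cid, vec in index.items()]
    scored.sort(reverse=True)
    return scored[:k]

# Toy index: CID -> stored embedding.
index = {
    "v1::aaa": [1.0, 0.0, 0.0],
    "v1::bbb": [0.9, 0.1, 0.0],
    "v1::ccc": [0.0, 1.0, 0.0],
}
print(top_k([1.0, 0.05, 0.0], index, k=2))
```

FAISS trades the exhaustive scan above for an approximate search over clustered vectors, which is what keeps query latency low as a miner's index grows.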

Network

| Parameter | Value |
| --- | --- |
| Status | Testnet |
| Subnet UID | 42 |
| Proof type | HMAC-SHA256 |
| Vector index | FAISS IVF-flat |
| Embedding dim | 1536 |
| Scoring interval | 120 seconds |
engram docs · v0.1