Memory AI
Engram's /memory page is a live demo of an AI that never forgets. Every message you send is permanently stored as a vector embedding on the Engram decentralized network, recalled by semantic similarity across all your future sessions.
How it works
Every conversation turn goes through four steps before the AI replies:
1. Embed: your message is converted into a vector embedding by all-MiniLM-L6-v2 running on the Engram miner. This turns meaning into math that can be compared to other memories.
2. Store: the embedding is written under a `v1::...` content-addressed CID. Memories live on the decentralized network indefinitely — not in a database you own.
3. Recall: semantically similar past messages are retrieved and filtered to your session.
4. Generate: the recalled memories are injected into the LLM prompt, and the reply is produced and stored in the same way.

Your identity — no login required
Engram uses a stable anonymous UUID stored in your browser's localStorage as your identity. No account, no email, no wallet needed.
```js
// Your ID lives here, in localStorage
localStorage.getItem("engram_session")
// → "s_m3x7k2_abc9f1..."
```
This ID is used to:
- Tag every message stored on Engram with your session
- Filter recalled memories so you never see another user's data
- Load your full chat history from the server on any new visit
- Generate shareable read-only links to your conversations
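An ID of that shape can be produced with a tiny generator. This is a hypothetical sketch, not the page's actual browser-side code; only the `s_<base36>_<hex>` shape is taken from the examples shown here:

```python
import secrets
import time


def generate_session_id() -> str:
    """Generate an anonymous session ID shaped like "s_m3x7k2_abc9f1...".

    Assumption: a base36 timestamp plus random hex, mirroring the IDs
    shown on this page; the real implementation may differ.
    """
    # Encode the current Unix time in base36 for a short prefix
    ts = int(time.time())
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    b36 = ""
    while ts:
        ts, rem = divmod(ts, 36)
        b36 = digits[rem] + b36
    # Append cryptographically random hex so IDs never collide
    return f"s_{b36}_{secrets.token_hex(6)}"
```

Because the ID is random rather than derived from any personal data, it carries no PII by construction.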
If you clear localStorage or switch browsers without copying your ID, you start fresh. See Cross-device sync below.

Chat history persistence
Messages are saved in two places simultaneously so your history is never lost:
- Local: every message is written to your browser cache immediately. This is what you see when you open the page, with no loading delay.
- Server: messages are synced to a SQLite database on the Engram server. This survives browser cache clears, private mode, and device switches.
When you load the page, the server copy wins if it has more messages than your local cache — for example after visiting from a different device.
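The reconciliation rule above can be sketched as a pure function. The function name and message shape are hypothetical, not the actual implementation:

```python
def reconcile_history(local: list[dict], server: list[dict]) -> list[dict]:
    """Pick the message list to display on page load.

    Mirrors the rule described above: the server copy wins only when it
    holds more messages than the local browser cache (e.g. after chatting
    from a different device).
    """
    return server if len(server) > len(local) else local
```

Preferring the local copy on ties keeps the instant-load path (the browser cache) authoritative in the common single-device case.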
Cross-device sync
To continue your conversation on a different device or browser, copy your session ID from the original browser and paste it into the new one:
```js
// On your original browser — open the DevTools console:
localStorage.getItem("engram_session")
// Copy the result, e.g. "s_m3x7k2_abc9f1..."

// On your new browser — paste it:
localStorage.setItem("engram_session", "s_m3x7k2_abc9f1...")
// Reload the page — your history loads from the server.
```
Sharing a conversation
Click the Share button in the top-right of the chat to copy a read-only link. The full conversation is base64-encoded into the URL — no server needed, works anywhere.
```sh
# Share URL format
https://theengram.space/memory?view=eyJyIjoie...
# Anyone who opens this sees a read-only replay of the chat
# with a "Start your own" button to begin their own session
```
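The encode/decode round trip can be sketched in a few lines. The payload schema (a `"r"` key holding the message list) is an assumption inferred from the token shown above, not a documented format:

```python
import base64
import json

BASE_URL = "https://theengram.space/memory"


def make_share_url(messages: list[dict]) -> str:
    """Pack a conversation into a read-only share URL (no server needed)."""
    payload = json.dumps({"r": messages}, separators=(",", ":"))
    token = base64.urlsafe_b64encode(payload.encode()).decode().rstrip("=")
    return f"{BASE_URL}?view={token}"


def read_share_url(url: str) -> list[dict]:
    """Decode the conversation back out of a share URL."""
    token = url.split("view=", 1)[1]
    # Restore the base64 padding stripped when the URL was built
    padded = token + "=" * (-len(token) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    return payload["r"]
```

Using URL-safe base64 and stripping padding keeps the token copy-pasteable without percent-encoding.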
Chat history folders
The chat interface groups your messages into date sections — Today, Yesterday, and then by calendar date for older conversations. Each section is collapsible. This makes it easy to find a specific conversation without scrolling through everything.
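The grouping above can be sketched as a bucketing function over message dates. The function name is hypothetical; only the Today / Yesterday / calendar-date labels come from the page:

```python
from datetime import date, timedelta


def section_label(msg_date: date, today: date) -> str:
    """Bucket a message date into a chat-history section label."""
    if msg_date == today:
        return "Today"
    if msg_date == today - timedelta(days=1):
        return "Yesterday"
    # Older conversations are grouped by calendar date
    return msg_date.isoformat()
```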
Build your own memory-powered AI
The full stack is open source. The key pieces are:
1. Store a message on Engram
```python
from engram.sdk import EngramClient

client = EngramClient("http://your-miner:8091")

# Store a user message with session metadata
cid = client.ingest(
    "I'm 22 years old and I built Engram",
    metadata={
        "role": "user",
        "session": "s_m3x7k2_abc9f1",
        "text": "I'm 22 years old and I built Engram",
    },
)
print(cid)  # v1::a3f9...
```
2. Recall memories before generating a reply
```python
# Query for semantically related past messages
results = client.query("how old are you", top_k=30)

# Filter to this user's session only
session_memories = [
    r for r in results
    if r["metadata"].get("session") == "s_m3x7k2_abc9f1"
    and r["metadata"].get("text")
][:12]

# Build context
memory_lines = [
    f"[{r['metadata']['role'].title()}]: {r['metadata']['text']}"
    for r in session_memories
]
context = "\n".join(memory_lines)
```
3. Inject into your LLM call
```python
import anthropic  # or openai, xai, etc.

client_ai = anthropic.Anthropic()

system = f"""You are a memory-powered AI assistant.
You have access to the user's past conversation history:

{context}

Treat these as real facts. Never deny knowing something that appears above."""

response = client_ai.messages.create(
    model="claude-haiku-4-5-20251001",
    max_tokens=1024,
    system=system,
    messages=[{"role": "user", "content": user_message}],
)
```
4. Store the AI response too
```python
reply_text = response.content[0].text

# Store so it can be recalled in future turns
cid = client.ingest(
    reply_text,
    metadata={
        "role": "assistant",
        "session": "s_m3x7k2_abc9f1",
        "text": reply_text[:500],
    },
)
```
Privacy & data ownership
- Session isolation: Engram returns up to 40 candidate memories per query but only those matching your session ID reach the AI. Other users' memories are discarded server-side before context injection.
- Decentralized storage: Messages are stored as vector embeddings across Bittensor miners — not in a central database controlled by Engram or any single entity.
- No PII required: Your identity is an anonymous UUID. No email, name, or wallet address is ever requested or stored.
- Sensitive data: For production use cases involving sensitive information, use Private Namespaces which encrypt your data with a PBKDF2-derived key before storing on miners.
/memory is built entirely with open-source components. View the full source in the GitHub repo under engram-web/app/memory/ and engram-web/app/api/chat/.