
LangChain Integration

EngramVectorStore implements the LangChain VectorStore interface — drop Engram into any LangChain pipeline.

Install

```bash
pip install langchain-core langchain-openai engram-subnet
```

Basic usage

```python
from langchain_openai import OpenAIEmbeddings

from engram.sdk.langchain import EngramVectorStore

embeddings = OpenAIEmbeddings()
store = EngramVectorStore(
    miner_url="http://127.0.0.1:8091",
    embeddings=embeddings,  # omit to use the miner's built-in embedder
)

# Store documents
store.add_texts(
    ["BERT uses bidirectional transformers.", "GPT generates text autoregressively."],
    metadatas=[{"source": "paper"}, {"source": "paper"}],
)

# Similarity search
docs = store.similarity_search("how does attention work?", k=5)
for doc in docs:
    print(doc.page_content, doc.metadata)

# With scores
docs_and_scores = store.similarity_search_with_score("transformers", k=3)
for doc, score in docs_and_scores:
    print(f"{score:.4f} — {doc.page_content[:60]}")
```
Tip
If `embeddings` is omitted, the miner's built-in sentence-transformers model is used. Pass an `embeddings` object to use OpenAI, Cohere, HuggingFace, or any other LangChain embedding provider.
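The `(doc, score)` pairs returned by `similarity_search_with_score` can be post-filtered client-side before they reach the rest of your pipeline. A minimal sketch with stand-in data (`filter_by_score` and the 0.5 threshold are illustrative, not part of the Engram API; check whether your miner reports similarity or distance before picking a cutoff):

```python
def filter_by_score(docs_and_scores, min_score=0.5):
    """Keep only results at or above a score threshold.

    Assumes higher scores mean more similar; flip the comparison
    if your store returns distances instead.
    """
    return [(doc, score) for doc, score in docs_and_scores if score >= min_score]

# Stand-in results shaped like similarity_search_with_score output
results = [("BERT uses bidirectional transformers.", 0.91),
           ("GPT generates text autoregressively.", 0.42)]
kept = filter_by_score(results)
print(kept)  # only the 0.91 result survives the 0.5 cutoff
```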

As a retriever

```python
retriever = store.as_retriever(search_kwargs={"k": 5})

# Use in any chain
docs = retriever.invoke("what is Bittensor?")
```

RetrievalQA chain

```python
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
retriever = store.as_retriever(search_kwargs={"k": 5})
chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
)
# invoke is the current entry point; Chain.run is deprecated
answer = chain.invoke({"query": "How does Bittensor distribute rewards?"})
print(answer["result"])
```
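Here, `chain_type="stuff"` means the retrieved documents are simply concatenated ("stuffed") into a single prompt for the LLM. A rough stdlib illustration of that step (`stuff_documents` is a hypothetical stand-in, not the LangChain or Engram API):

```python
def stuff_documents(texts, question):
    """Mimic the 'stuff' strategy: join all retrieved texts into one
    context block and embed it in a QA prompt."""
    context = "\n\n".join(texts)
    return (
        "Use the context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = stuff_documents(
    ["Bittensor rewards miners via Yuma consensus.",
     "Validators score miner responses."],
    "How does Bittensor distribute rewards?",
)
print(prompt)
```

Because every document lands in one prompt, "stuff" works best when `k` is small enough that the combined context fits the model's window; LangChain also offers `map_reduce` and `refine` for larger result sets.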
engram docs · v0.1