# Add lifelong memory to your AI apps
LLMs are great at reasoning, but terrible at remembering.
Your users start every chat from zero — and your agents lose context the moment a session ends.
The Medhara SDK gives your product continuity: a simple API to capture evolving user context, build long-term memory graphs, and recall by meaning, not just by similarity.
## Why Developers Need Memory
RAG gives you knowledge. Memory gives you personalization.
Use RAG for general truth. Use Medhara for user truth.
### Use RAG For
- Documentation, specs, FAQs
- Research & domain data
- Static company knowledge
- One-time Q&A
### Use Medhara For
- User preferences & goals
- Behavioral patterns
- Conversations that evolve
- Lifelong personalization
When you combine both, your app stops being a search box and becomes a companion that learns.
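In code, the combination is a single merge step. Below is a minimal sketch of the hybrid pattern; the `vector_store` interface and the `.content` field on memory results are illustrative assumptions, while the `memories.search` call mirrors the Quick Example further down.

```python
def build_context(client, vector_store, user_id: str, query: str) -> str:
    """Merge general truth (RAG) with user truth (Medhara) into one prompt.

    `vector_store` stands in for Pinecone, pgvector, etc. -- any object
    with a `search(query, top_k)` method returning text chunks.
    The `.content` field on memory results is assumed for illustration.
    """
    # General truth: topical chunks from your existing retrieval layer.
    doc_chunks = vector_store.search(query, top_k=3)

    # User truth: this user's memories from Medhara.
    memories = client.memories.search(query=query, container_tags=[user_id])

    knowledge = "\n".join(doc_chunks)
    preferences = "\n".join(m.content for m in memories)
    return (
        f"Relevant knowledge:\n{knowledge}\n\n"
        f"What we know about this user:\n{preferences}\n\n"
        f"Question: {query}"
    )
```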
## Why Personalization Matters
Remember how Google beat other search engines? They didn’t just show links — they learned from you. What you clicked, ignored, lingered on. That’s what made Google feel intelligent.
The same shift is coming for AI. LLMs will no longer just complete prompts; they’ll understand individuals.
Medhara lets you bring that capability to your product — safely, modularly, and with full control.
## What You Can Build
Add memory once, and every interaction becomes smarter over time.
## Quick Example
Here’s how easy it is to give your agents memory using the Medhara SDK:
```python
from medhara import Client

client = Client(api_key="your_key_here")

# 1️⃣ Add general knowledge (RAG-style)
client.memories.add(
    content="Python decorators wrap functions for reuse."
)

# 2️⃣ Add user-specific memory
client.memories.add(
    content="User prefers concise explanations with examples.",
    container_tags=["user_123"],
    metadata={"type": "preference", "confidence": "high"}
)

# 3️⃣ Hybrid retrieval
results = client.memories.search(
    query="Explain decorators again",
    container_tags=["user_123"]
)

# Result:
# Your chatbot doesn’t just know what decorators are —
# it knows how your user likes them explained.
```
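To close the loop, fold the recalled memories into your model prompt. Below is a minimal sketch continuing from the `results` variable above, using the OpenAI Python client; the shape of Medhara's search results (objects exposing a `.content` string) is an assumption for illustration.

```python
from openai import OpenAI

llm = OpenAI()

# Assumed shape: each Medhara search result exposes a `.content` string.
memory_context = "\n".join(m.content for m in results)

response = llm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": f"Tailor your answer to this user:\n{memory_context}",
        },
        {"role": "user", "content": "Explain decorators again"},
    ],
)
print(response.choices[0].message.content)
```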
## How It Works
- Embed & understand — turn any text, chat, or event into semantic units.
- Link & evolve — create contextual relationships and timelines.
- Promote or decay — prioritize important memories, fade the rest (a toy scoring sketch follows this list).
- Retrieve & reason — recall context not just by keywords, but by meaning.
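To make the promote-or-decay step concrete, here is a toy recency-weighted scoring function, assuming exponential decay with a configurable half-life. Medhara's actual ranking is internal and may differ.

```python
from datetime import datetime, timedelta, timezone

def memory_score(similarity: float, last_used: datetime,
                 half_life_days: float = 30.0) -> float:
    """Toy score: semantic relevance discounted by age.

    Illustrative only -- Medhara's internal ranking is not exposed here.
    """
    age_days = (datetime.now(timezone.utc) - last_used).total_seconds() / 86400
    # Weight halves every `half_life_days`, so stale memories fade away.
    decay = 0.5 ** (age_days / half_life_days)
    return similarity * decay

recent = datetime.now(timezone.utc) - timedelta(days=1)
print(memory_score(0.9, recent))  # ~0.879: barely decayed after one day
```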
## Features
| Feature | Description |
|---|---|
| Graph-linked memory | Entities & relationships across time |
| API-first design | Simple Python & JS SDKs |
| Temporal awareness | Recency and evolution tracking |
| Hybrid RAG support | Combine with any retrieval system |
| Private containers | Isolate user or workspace memories |
| Memory lifecycle | Creation → Promotion → Decay → Recall |
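The private-containers row deserves a note: assuming `container_tags` scope both writes and reads as they do in the Quick Example, isolating a workspace is just a matter of tagging consistently. A sketch (the tag names are illustrative):

```python
# Write a memory into one workspace's container.
client.memories.add(
    content="Team prefers deploy previews before merging.",
    container_tags=["workspace_acme"],
)

# A search scoped to a different container should not surface it.
results = client.memories.search(
    query="deployment process",
    container_tags=["workspace_other"],
)
```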
## Integrate Medhara in Your Stack
| Layer | Example |
|---|---|
| Frontend | Next.js / React app calling Medhara API |
| Backend | NodeJS, FastAPI, or LangChain agents |
| Storage | Works alongside Pinecone, Postgres, or Mongo |
| LLM Orchestration | Compatible with LangGraph, CrewAI, AutoGen |
Medhara fits where your embeddings end — and true understanding begins.
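As a concrete backend example, here is a minimal FastAPI sketch of the recall-before, write-back-after pattern. Only `Client`, `memories.search`, and `memories.add` come from the Quick Example above; the endpoint shape and the `generate_answer` placeholder are illustrative.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from medhara import Client  # same import as the Quick Example

app = FastAPI()
memory = Client(api_key="your_key_here")

class ChatRequest(BaseModel):
    user_id: str
    message: str

def generate_answer(message: str, memories) -> str:
    """Placeholder for your LLM orchestration layer (LangGraph, CrewAI, ...)."""
    raise NotImplementedError

@app.post("/chat")
def chat(req: ChatRequest):
    # Recall this user's context before answering.
    memories = memory.memories.search(
        query=req.message,
        container_tags=[req.user_id],
    )
    answer = generate_answer(req.message, memories)

    # Write the exchange back so the next session starts with context.
    memory.memories.add(
        content=f"User said: {req.message}",
        container_tags=[req.user_id],
    )
    return {"answer": answer}
```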
## The Vision for Builders
You gave your agents tools. You gave them retrieval. Now, give them memory.
With Medhara, every user, team, and agent gets its own evolving memory graph — so your product feels personal, alive, and impossible to replace.
Ready to add memory to your stack?
Join the early developer beta and bring lifelong personalization to your LLM apps.