Add lifelong memory to your AI apps

LLMs are great at reasoning, but terrible at remembering.
Your users start every chat from zero — and your agents lose context the moment a session ends.

Medhara SDK gives your product continuity: a simple API to capture evolving user context, build long-term memory graphs, and recall by meaning, not just by vector similarity.

Why Developers Need Memory

RAG gives you knowledge. Memory gives you personalization.
Use RAG for general truth. Use Medhara for user truth.

Use RAG For

  • Documentation, specs, FAQs
  • Research & domain data
  • Static company knowledge
  • One-time Q&A

Use Medhara For

  • User preferences & goals
  • Behavioral patterns
  • Conversations that evolve
  • Lifelong personalization

When you combine both, your app stops being a search box and becomes a companion that learns.

Why Personalization Matters

Remember how Google beat other search engines? They didn’t just show links — they learned from you. What you clicked, ignored, lingered on. That’s what made Google feel intelligent.

The same shift is coming for AI. LLMs will no longer just complete prompts; they’ll understand individuals.

Medhara lets you bring that capability to your product — safely, modularly, and with full control.

What You Can Build

  • Context-aware chatbots that remember every conversation
  • Adaptive copilots that evolve with user habits
  • Personalized recommendation engines for content, code, or learning
  • Autonomous agents that retain goals, feedback, and task history

Add memory once, and every interaction becomes smarter over time.

Quick Example

Here’s how easy it is to give your agents memory using the Medhara SDK:

from medhara import Client
client = Client(api_key="your_key_here")

# 1️⃣ Add general knowledge (RAG-style)
client.memories.add(
    content="Python decorators wrap functions for reuse."
)

# 2️⃣ Add user-specific memory
client.memories.add(
    content="User prefers concise explanations with examples.",
    container_tags=["user_123"],
    metadata={"type": "preference", "confidence": "high"}
)

# 3️⃣ Hybrid retrieval
results = client.memories.search(
    query="Explain decorators again",
    container_tags=["user_123"]
)

# Result:
# Your chatbot doesn’t just know what decorators are —
# it knows how your user likes them explained.
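
From here, you would typically fold the recalled memories into your model's system prompt. The snippet below is a hedged continuation of the example above: the search response schema isn't shown in this doc, so treat the .content attribute as an assumption and check the SDK's actual return type before relying on it.

# Assemble a system prompt from the recalled memories.
# Assumption: each search result exposes a `content` string.
memory_context = "\n".join(f"- {m.content}" for m in results)

system_prompt = (
    "You are a helpful assistant. What you know about this user:\n"
    + memory_context
)
# Pass system_prompt to your LLM call alongside the user's new message.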

How It Works

  1. Embed & understand — turn any text, chat, or event into semantic units.
  2. Link & evolve — create contextual relationships and timelines.
  3. Promote or decay — prioritize important memories, fade the rest (see the sketch below).
  4. Retrieve & reason — recall context by meaning, not just keywords.
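
To make step 3 concrete, here is a minimal sketch of one way promotion and decay can work. This is illustrative only, not Medhara's actual internals: the Memory dataclass, the 30-day half-life, and the boost value are all assumptions made for the example.

import time
from dataclasses import dataclass, field

# Hypothetical record shape; Medhara's real schema may differ.
@dataclass
class Memory:
    content: str
    importance: float = 0.5                 # promoted memories carry more weight
    last_accessed: float = field(default_factory=time.time)

HALF_LIFE_DAYS = 30.0                       # assumed: relevance halves after 30 idle days

def relevance(m, now=None):
    """Score a memory: importance weighted by exponential time decay."""
    age_days = ((now or time.time()) - m.last_accessed) / 86_400
    return m.importance * 0.5 ** (age_days / HALF_LIFE_DAYS)

def promote(m, boost=0.1):
    """On recall, refresh the timestamp and bump importance toward 1.0."""
    m.last_accessed = time.time()
    m.importance = min(1.0, m.importance + boost)

Ranking candidates by relevance() and calling promote() on the ones the model actually used closes the loop: useful memories stay fresh, and unused ones fade on their own.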

Features

  • Graph-linked memory: entities & relationships across time
  • API-first design: simple Python & JS SDKs
  • Temporal awareness: recency and evolution tracking
  • Hybrid RAG support: combine with any retrieval system
  • Private containers: isolate user or workspace memories
  • Memory lifecycle: creation → promotion → decay → recall

Integrate Medhara in Your Stack

  • Frontend: Next.js / React app calling the Medhara API
  • Backend: Node.js, FastAPI, or LangChain agents
  • Storage: works alongside Pinecone, Postgres, or Mongo
  • LLM orchestration: compatible with LangGraph, CrewAI, AutoGen
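
As one concrete wiring for the backend layer, here's a hedged sketch of a FastAPI route that recalls memories before each model call and stores the exchange afterward. call_llm is a placeholder for whatever model client you use, and the way search results are serialized into the prompt is an assumption, not part of the Medhara API.

from fastapi import FastAPI
from medhara import Client

app = FastAPI()
client = Client(api_key="your_key_here")

def call_llm(system, prompt):
    """Placeholder: swap in your actual model client (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

@app.post("/chat/{user_id}")
def chat(user_id: str, message: str):
    # Recall this user's memories before answering.
    memories = client.memories.search(query=message, container_tags=[user_id])
    system = "Known about this user:\n" + "\n".join(str(m) for m in memories)
    reply = call_llm(system, message)

    # Store the exchange so future sessions remember it.
    client.memories.add(
        content=f"User said: {message}\nAssistant replied: {reply}",
        container_tags=[user_id],
    )
    return {"reply": reply}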

Medhara fits where your embeddings end — and true understanding begins.

The Vision for Builders

You gave your agents tools. You gave them retrieval. Now, give them memory.

With Medhara, every user, team, and agent gets its own evolving memory graph — so your product feels personal, alive, and impossible to replace.

Ready to add memory to your stack?

Join the early developer beta and bring lifelong personalization to your LLM apps.