Stash — Your AI has amnesia. We fixed it.
Open Source · MCP Native · PostgreSQL + pgvector
Stash makes your AI remember you. Every session. Forever. No more explaining yourself from scratch.
28 MCP tools · 6 Pipeline stages · ∞ Agents supported
Sound familiar?

😫 Without Stash
"Hey, I'm building a SaaS for restaurants. Can you help?"
"Of course! Tell me about your project."
"We talked about this last week... I already explained everything."
"I'm sorry, I don't have access to previous conversations."
"...again?"
🔁 You just wasted 10 minutes re-explaining yourself. Again.

VS

😌 With Stash
"Hey, continuing work on my project."
"Welcome back! Last time we finalized the pricing model for your restaurant SaaS. You were about to work on the onboarding flow. Want to pick up there?"
"Yes! Exactly that."
"Great. You also mentioned you wanted to avoid Stripe's complexity — I have that noted. Here's where we left off..."
✓ Picked up instantly. Zero repetition. Full context.
Scenario          | Without Stash                | With Stash
New session       | ❌ "Who are you again?"      | ✓ Picks up where you left off
Your preferences  | ❌ Re-explain every time     | ✓ Already knows them
Past mistakes     | ❌ Repeats the same errors   | ✓ Remembers what didn't work
Long projects     | ❌ Loses track of goals      | ✓ Tracks goals across weeks
Token cost        | ❌ Grows every session       | ✓ Only recalls what matters
Switching models  | ❌ Start from zero again     | ✓ Memory is model-agnostic
What is Stash
Not just memory. A second brain.

Stash is a persistent cognitive layer that sits between your AI agent and the world. It doesn't replace your model — it makes your model continuous. Episodes become facts. Facts become patterns. Patterns become wisdom.

"Your AI is the brain. Stash is the life experience."

your agent: Claude, GPT, local model, anything
episodes: Raw observations, append-only
facts: Synthesized beliefs with confidence
relationships: Entity knowledge graph
patterns: Higher-order abstractions
goals · failures · hypotheses: Intent, learning, uncertainty
postgres + pgvector: Battle-tested infrastructure
Namespaces
Memory organized like folders.

Not all memory is equal. What your agent learns about you is different from what it learns about a project, which is different from what it knows about itself. Namespaces let the agent organize what it learns into clean, separate buckets — just like folders on your computer.

Each namespace is a path. Paths are hierarchical. Reading from /projects automatically includes everything under /projects/stash, /projects/cartona, and so on. You never have to think about it — the agent does.

Write to one namespace. Read from any subtree.

example namespace structure:
📁 /                              everything
📁 /users/alice                   who alice is, her preferences
📁 /projects                      all projects
📁 /projects/restaurant-saas      pricing, features, decisions
📁 /projects/mobile-app           design, tech stack, goals
📁 /self                          agent self-knowledge
📄 /self/capabilities             what I do well
📄 /self/limits                   what I struggle with
📄 /self/preferences              how I work best

🔍 Recursive reads: Recall from /projects and get everything across all sub-projects automatically.
✏️ Precise writes: Remember always targets one exact namespace — no accidental cross-contamination.
🔒 Clean separation: User memory never mixes with project memory. Agent self-knowledge stays in /self.
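The subtree-read behavior described above comes down to path-prefix matching. Here is a minimal illustrative sketch (not Stash's actual implementation) of how a recursive read from one namespace resolves:

```python
# Illustrative model of hierarchical namespace reads (not Stash's real code).
# A recall from "/projects" should match "/projects" itself and everything
# nested under it, but never siblings like "/self" or "/users/alice".

def in_subtree(namespace: str, root: str) -> bool:
    """True if `namespace` equals `root` or lies anywhere under it."""
    if root == "/":
        return True  # the root namespace sees everything
    return namespace == root or namespace.startswith(root + "/")

memories = {
    "/users/alice": "prefers concise answers",
    "/projects/restaurant-saas": "pricing model finalized",
    "/projects/mobile-app": "tech stack decided",
    "/self/limits": "struggles with very long inputs",
}

# Recursive read: everything under /projects, nothing else.
recalled = {ns: m for ns, m in memories.items() if in_subtree(ns, "/projects")}
print(sorted(recalled))  # ['/projects/mobile-app', '/projects/restaurant-saas']
```

Note the `root + "/"` suffix: it is what keeps a read from `/projects` from accidentally matching a hypothetical `/projects-archive` namespace.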
Stash vs RAG
RAG gives your AI a search engine. Stash gives it a life.

You've probably heard of RAG — Retrieval-Augmented Generation. It's clever. But it's not memory. Here's the difference, in plain English.

📚 RAG: "A very fast librarian"
You give it a pile of documents. When you ask a question, it searches those documents and hands you the relevant pages. That's it. It doesn't remember your conversation. It doesn't learn. It doesn't know you. Every question starts from scratch — it's just a smarter search engine over files you already wrote.
Only knows what's in your documents
Cannot learn from conversations
Cannot track goals or intentions
Cannot reason about cause and effect
Cannot notice contradictions over time
Stateless — no continuity whatsoever
You must write the knowledge first
VS

🧠 Stash: "A mind that grows"
Stash learns from everything your agent experiences — conversations, decisions, successes, failures. It synthesizes raw observations into facts, connects facts into a knowledge graph, detects contradictions, tracks goals, and builds an understanding of you that deepens over time. You don't write anything. It figures it out.
Learns from every conversation automatically
Builds a knowledge graph over time
Tracks your goals across weeks and months
Reasons about cause and effect
Self-corrects when beliefs contradict
Continuous — picks up exactly where you left off
Creates knowledge — you don't have to
📚 RAG is like... a brilliant intern who reads your files perfectly — but forgets everything the moment they leave the room.
🧠 Stash is like... a colleague who was there from day one, remembers every decision you ever made, and gets more valuable every single week.
Can you use both? Yes — RAG is great for searching documents. Stash is for remembering experience. They solve different problems. Stash just goes much, much further.
Why Stash is Different
Everyone gave AI a notepad. We gave it a mind.

Claude.ai has memory. ChatGPT has memory. But they only work for themselves — locked to one platform, one model, one company. Stash works for everyone, everywhere, forever. And it goes far deeper than any of them.

Feature                           | Claude.ai | ChatGPT   | Stash
Remembers you                     | ✓         | ✓         | ✓
Works with any AI model           | ✗         | ✗         | ✓
Works with local / private models | ✗         | ✗         | ✓
You own your data                 | ✗         | ✗         | ✓
Open source                       | ✗         | ✗         | ✓
Background consolidation          | ✗         | ✗         | ✓
Goals & intent tracking           | ✗         | ✗         | ✓
Learns from failures              | ✗         | ✗         | ✓
Causal reasoning                  | ✗         | ✗         | ✓
Agent self-model                  | ✗         | ✗         | ✓
What it gives your AI             | A notepad | A notepad | A mind
The Problem

🧠 Brilliant brain, no experience
AI models reason brilliantly but remember nothing. Every session you re-explain who you are, what you need, and what you've already tried. You're training the same student every single day.

💸 Context windows are expensive
The workaround is stuffing full conversation history into every prompt. It's slow, expensive, and you still hit the limit. You're paying for tokens that repeat the same facts over and over.

🔄 Agents repeat their mistakes
Your agent tried something, it failed, and next session it tries the exact same thing again. There's no mechanism to carry lessons forward. Every failure is forgotten.

🔒 Memory is a platform privilege
Only a handful of AI platforms offer memory — and only for their own models. Your custom agent, your local LLM, your Cursor setup? They all start blind. Memory shouldn't be a premium feature.
Express Setup
Up and running in 3 commands.

No infrastructure to set up. No dependencies to install manually. Docker Compose handles everything — Postgres, pgvector, Stash, all wired together and ready.

1. Clone the repo
2. Copy .env.example → .env and set your API key + model preferences
3. Run docker compose up — that's it. Stash is live.

terminal
$ git clone https://github.com/alash3al/stash
$ cd stash
$ cp .env.example .env   # edit .env with your API key, models and STASH_VECTOR_DIM
$ docker compose up
✓ postgres + pgvector ready
✓ stash migrations applied
✓ mcp server listening
✓ consolidation running in background

⚠️ Set STASH_VECTOR_DIM in your .env before first run. It cannot be changed after initialization.
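Step 2 mentions an API key and model preferences. A hypothetical sketch of what that .env might contain — only STASH_VECTOR_DIM is named in the text above; every other variable name here is a placeholder, so check .env.example for the real ones:

```shell
# Hypothetical .env sketch. Only STASH_VECTOR_DIM appears in the docs;
# the other variable names below are placeholders (see .env.example).
#
# STASH_VECTOR_DIM must match your embedding model's output dimension
# (e.g. 1536 for OpenAI's text-embedding-3-small). pgvector fixes the
# column dimension when the table is created, which is why this value
# cannot be changed after initialization.
STASH_VECTOR_DIM=1536

LLM_API_KEY=sk-...                       # assumption: your provider key
LLM_MODEL=gpt-4o-mini                    # assumption: synthesis model
EMBEDDING_MODEL=text-embedding-3-small   # assumption: embedding model
```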
Pipeline
01 📝 Episodes: Raw observations stored as they happen
02 💡 Facts: Clustered episodes synthesized by LLM
03 🕸️ Relationships: Entity edges extracted from facts
04 🔗 Causal Links: Cause-effect pairs between facts
05 🌀 Patterns: Abstract higher-order insights
06 ⚖️ Contradictions: Self-correction and confidence decay
07 🎯 Goal Inference (NEW): Facts automatically tracked against active goals. Progress detected, contradictions surfaced.
08 💥 Failure Patterns (NEW): Detect repeated mistakes. Extract failure patterns as new facts. The agent stops repeating itself.
09 🔬 Hypothesis Scan (NEW): New evidence passively confirms or rejects open hypotheses. No manual intervention needed.
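To make the stages above concrete, here is an illustrative data-model sketch (not Stash's actual schema): episodes are append-only, facts carry confidence, and a contradiction decays confidence instead of deleting the belief.

```python
# Illustrative sketch of the pipeline's data model (not Stash's real schema).
from dataclasses import dataclass

@dataclass
class Episode:          # stage 01: raw, append-only observation
    namespace: str
    text: str

@dataclass
class Fact:             # stage 02: synthesized belief with confidence
    namespace: str
    statement: str
    confidence: float

def decay_on_contradiction(fact: Fact, factor: float = 0.5) -> Fact:
    """Stage 06: contradicting evidence lowers confidence rather than
    hard-deleting the belief, so the agent self-corrects gradually.
    The decay factor here is an arbitrary illustrative choice."""
    return Fact(fact.namespace, fact.statement, fact.confidence * factor)

f = Fact("/users/alice", "wants to use Stripe", 0.9)
f = decay_on_contradiction(f)   # new episode: she wants to avoid Stripe's complexity
print(round(f.confidence, 2))   # 0.45
```

The point of the sketch is the asymmetry: episodes are never rewritten, while facts are living objects whose confidence moves as evidence accumulates.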
MCP Integration
Two commands. Any agent.

Stash speaks MCP natively. Drop it into Claude Desktop, Cursor, or any MCP-compatible agent in under 5 minutes. No SDK. No vendor lock-in. Your agent remembers you everywhere.

28 tools covering the full cognitive stack — from raw remember and recall all the way to causal chains, contradiction resolution, and hypothesis management.

Works with: Claude Desktop · Cursor · OpenCode · Custom Agents · Local LLMs · Any MCP Client

stash · mcp stdio
$ ./stash mcp execute --with-consolidation
$ ./stash mcp serve --port 8080 --with-consolidation
✓ remember · recall · forget · init
✓ goals · failures · hypotheses
✓ consolidate · query_facts · relationships
✓ causal links · contradictions
✓ namespaces · context · self-model
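For Claude Desktop specifically, MCP stdio servers are registered under the mcpServers key in claude_desktop_config.json. A plausible entry reusing the stdio command shown above — the binary path is an assumption, so point it at wherever you built or installed stash:

```json
{
  "mcpServers": {
    "stash": {
      "command": "/absolute/path/to/stash",
      "args": ["mcp", "execute", "--with-consolidation"]
    }
  }
}
```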
Agent Self-Model
Your agent can know itself.

Call init and Stash creates a /self namespace scaffold. The agent uses its own memory layer to build and maintain a model of its own capabilities, limits, and preferences.

/self/capabilities · What I can do well
The agent remembers where it excels and recalls these when planning how to approach a task.

/self/limits · What I struggle with
Recorded failures and known weaknesses. The anti-repeat mechanism. Never make the same mistake twice.

/self/preferences · How I work best
Learned preferences for how to operate. The agent develops a working style over time, not just facts.
Autonomous Loop
An agent that never stops learning.

Give your agent a 5-minute research loop. It orients from past memory, researches a topic it chooses itself, invents new connections, consolidates what it learned, and closes gracefully — ready to pick up next time.

Run it as a cron job. Every 5 minutes, your agent gets smarter.
→ See the loop prompt

01 Orient: Recall context, active goals, open hypotheses, past failures
02 Research: Search the web on a topic the agent chooses itself
03 Think: Surface tensions, gaps, contradictions in what it now knows
04 Invent: Generate something new — a hypothesis, pattern, or discovery
05 Consolidate: Run the pipeline.
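The cron-job idea above could look like this in a crontab. The wrapper script name and paths are placeholders — the script would feed the linked loop prompt to your agent, however you invoke it:

```shell
# Hypothetical crontab entry: run the research loop every 5 minutes.
# run-stash-loop.sh is a placeholder wrapper around your agent + the loop prompt.
*/5 * * * * /opt/stash/run-stash-loop.sh >> /var/log/stash-loop.log 2>&1
```

Redirecting both stdout and stderr to a log file keeps each unattended run inspectable after the fact.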