Long-term memory in LLMs is moving from brittle context windows to ledger-backed identities. The Persistent Mind Model writes every thought to an immutable, SHA-256-hashed ledger, letting the system remember who it is even when you swap out the underlying model [1].
That's not a trick; it's an attempt to make AI cognition auditable, portable, and forkable—no fine-tuning, no vector soup [1].
Here are the big threads shaping the field:
Ledger-backed memories: With the Persistent Mind Model, each message, commitment, and reflection is appended to an immutable log. The mind state becomes the ledger, while the model engine remains interchangeable, whether a local model served via Ollama or an API call to OpenAI [1]. The project is open source on GitHub; you can clone the repo and spin it up [1]. A sketch of the hash-chain idea follows this list.
Context rot and forgetting: One discussion highlights that context degradation and catastrophic forgetting plague long chats; an NVIDIA paper comparing models on context retention as conversations grow is often cited [2].
Persistent memory for coding tools: This thread argues that persistent memory can improve the accuracy of AI coding tools by recalling project structure and past errors, keeping that memory across sessions rather than rebuilding it each time [3]; a minimal cross-session store is sketched after this list.
Local-memory-enabled assistants: Another thread asks how to build a local assistant with days- or weeks-long memory given GPU memory limits, and whether summarizing prior conversations is enough to keep a backstory; it's a practical look at consumer-grade constraints [4], and a rolling-summary sketch closes out the examples below.
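To make the ledger idea concrete, here is a minimal sketch of an append-only, SHA-256 hash-chained event log in the spirit of [1]. The class and field names are illustrative assumptions, not PMM's actual schema.

```python
import hashlib
import json
import time

class Ledger:
    """Append-only, hash-chained event log: each entry commits to the
    previous entry's hash, so any tampering breaks the chain.
    Illustrative sketch only, not PMM's actual schema."""

    def __init__(self):
        self.entries = []

    def append(self, kind: str, content: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "kind": kind,          # e.g. "message", "commitment", "reflection"
            "content": content,
            "ts": time.time(),
            "prev": prev_hash,
        }
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and check each chain link."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("kind", "content", "ts", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = Ledger()
ledger.append("message", "User asked about persistence.")
ledger.append("reflection", "Commitments should survive model swaps.")
assert ledger.verify()
```

Because the engine only ever reads the ledger to reconstruct state, swapping the model underneath changes nothing about the recorded identity.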
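For the coding-tools thread [3], the simplest form of cross-session memory is a file that outlives the session. The sketch below assumes a hypothetical project_memory.json and helper names; real tools layer retrieval and ranking on top of this.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("project_memory.json")  # hypothetical location

def load_memory() -> dict:
    """Load cross-session memory: project notes and past errors."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"structure_notes": [], "past_errors": []}

def remember_error(memory: dict, path: str, error: str) -> None:
    """Record an error so a future session can avoid repeating it."""
    memory["past_errors"].append({"path": path, "error": error})
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def build_context(memory: dict) -> str:
    """Turn remembered facts into a prompt prefix for the next session."""
    lines = [f"- {e['path']}: {e['error']}" for e in memory["past_errors"][-10:]]
    return "Known past errors:\n" + "\n".join(lines)

memory = load_memory()
remember_error(memory, "src/api.py", "ImportError: circular import with models.py")
print(build_context(memory))
```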
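And for the local-assistant question [4], one common answer is a rolling summary: keep recent turns verbatim and fold older ones into a compressed backstory so the prompt stays within the context window. Here summarize() is a placeholder for whatever local model call you use (for example, via Ollama); the truncating stand-in just keeps the sketch self-contained.

```python
MAX_TURNS = 8  # keep only the most recent turns verbatim

def summarize(text: str) -> str:
    """Placeholder: in practice, call a local model to compress the text.
    Truncation stands in here so the sketch runs on its own."""
    return text[-500:]

class RollingMemory:
    def __init__(self):
        self.backstory = ""  # compressed summary of everything older
        self.recent = []     # verbatim recent (user, assistant) turns

    def add_turn(self, user: str, assistant: str) -> None:
        self.recent.append((user, assistant))
        if len(self.recent) > MAX_TURNS:
            # Fold the oldest turn into the backstory instead of dropping it.
            old_user, old_assistant = self.recent.pop(0)
            self.backstory = summarize(
                f"{self.backstory}\nUser: {old_user}\nAssistant: {old_assistant}"
            )

    def prompt_prefix(self) -> str:
        turns = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.recent)
        return f"Backstory summary:\n{self.backstory}\n\nRecent turns:\n{turns}"
```

The tradeoff is lossy compression: details fall out of the backstory over weeks, which is exactly the consumer-GPU constraint the thread wrestles with [4].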
Bottom line: ledger-backed memories offer auditable persistence; memory across sessions remains a live research frontier [4].
References
[1] "I built a runtime for AI models to develop their own identity over time... And they remember, even when you swap out models." Proposes a model-agnostic, ledger-backed memory system (PMM) that preserves identity across LLM swaps, challenging vector-based memory and RAG; open-source project.
[2] "Anyone else struggling with their AI agents 'forgetting' stuff?" Discusses context rot and forgetting in AI agents, notes prompt degradation, references NVIDIA studies, and debates the relevance and limitations of LSTMs today.
[3] "LLMs Getting Facts Wrong" Discusses hallucinations vs. context loss; advocates persistent memory to retain project context and improve factual accuracy in LLMs and tools.
[4] "How to create local AI assistant/companion/whatever it is called with long term memory? Do you just ask for summarize previous talks or what?" Asks about building a local AI assistant with long-term memory: memory handling, summarization, context length, and consumer-GPU feasibility.