Long-Term Memory for LLMs: From Context Rot to Ledger-Based Identities

1 min read
246 words
Opinions on Long-Term Memory in LLMs

Long-term memory in LLMs is moving from brittle context windows to ledger-backed identities. The Persistent Mind Model writes every thought to an immutable, SHA-256-hashed ledger, letting the system remember who it is even when you swap out the underlying model [1].

That's not a trick; it's an attempt to make AI cognition auditable, portable, and forkable: no fine-tuning, no vector soup [1].

Here are the big threads shaping the field:

Ledger-backed memories — With the Persistent Mind Model, each message, commitment, and reflection is appended to a log. The mind state becomes the ledger, while the model engine stays interchangeable, whether a local engine like Ollama or API calls to OpenAI [1]. The project is open source on GitHub; you can clone the repo and spin it up [1].

Context rot and forgetting — A conversation highlights that context degradation and catastrophic forgetting plague long chats; an NVIDIA paper comparing models on context retention as conversations grow is often cited [2].

Persistent memory for coding tools — This thread argues that persistent memory can improve the accuracy of AI coding tools by recalling project structure and past errors; the gain comes from carrying memory across sessions [3].

Local-memory-enabled assistants — Another thread asks how to build local assistants with days- or weeks-long memory given GPU memory limits, and wonders about summarizing prior conversations to keep a backstory; it's a practical look at consumer-grade constraints [4].

Bottom line: ledger-backed memories offer auditable persistence; memory across sessions remains a live research frontier [4].

References

[1]
Reddit

I built a runtime for AI models to develop their own identity over time... And they remember, even when you swap out models.

Proposes PMM, a model-agnostic, ledger-backed open-source memory system that preserves identity across LLM swaps, challenging vector-based memory and RAG.

View source
[2]
Reddit

Anyone else struggling with their AI agents ‘forgetting’ stuff?

Discusses context rot and forgetting in AI agents, notes prompt degradation, references NVIDIA studies, and debates the current relevance and limitations of LSTMs.

View source
[3]
HackerNews

LLMs Getting Facts Wrong

Discusses hallucinations vs. context loss; advocates persistent memory to retain project context and improve factual accuracy in LLMs and tools.

View source
[4]
Reddit

How to create local AI assistant/companion/whatever it is called with long term memory? Do you just ask for summarize previous talks or what?

Asks about building a local AI assistant with long-term memory, covering memory handling, summarization of prior conversations, context length, and consumer-GPU feasibility.

View source
