SQL, vectors, and graphs collide in AI memory, and the twist is surprisingly simple: relational databases might outlast the flashier vector and graph stores for persistence. In 2025, the argument for going back to SQL is getting louder as teams chase reliable memory across long conversations [1].
What failed, and what works today
- Vector databases – semantic recall is strong, with Pinecone and Weaviate, but retrieval can be noisy and lose structure [1].
- Graph databases – great for reasoning, yet scaling and maintenance stay tricky [1].
- Hybrid systems – flexible, but coordinating vectors, graphs, and relational data adds complexity [1].
- Relational databases – the Memori approach uses SQL tables to separate short-term from long-term memory, store entities, rules, and preferences, promote important facts into permanent memory, and use joins and indexes for retrieval (see the schema sketch below) [1].

The project comes from Gibson as an open-source memory engine [1].
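To make that last bullet concrete, here is a minimal sketch of what such a layout could look like, assuming a SQLite backend. The table names, columns, importance threshold, and the promote_important helper are illustrative assumptions, not Memori's actual schema.

```python
import sqlite3

# A minimal sketch of the "SQL tables for memory" idea, assuming a SQLite
# backend. Names and columns are illustrative, not Memori's actual schema.
conn = sqlite3.connect("memory.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS short_term_memory (
    id INTEGER PRIMARY KEY,
    conversation_id TEXT NOT NULL,
    content TEXT NOT NULL,
    importance REAL DEFAULT 0.0,          -- score used to decide promotion
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS long_term_memory (
    id INTEGER PRIMARY KEY,
    content TEXT NOT NULL,
    category TEXT,                        -- e.g. 'fact', 'rule', 'preference'
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS entities (
    id INTEGER PRIMARY KEY,
    memory_id INTEGER REFERENCES long_term_memory(id),
    name TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_entities_name ON entities(name);
""")

def promote_important(conn, threshold=0.8):
    """Promote high-importance short-term rows into permanent memory."""
    conn.execute("""
        INSERT INTO long_term_memory (content, category)
        SELECT content, 'fact' FROM short_term_memory WHERE importance >= ?
    """, (threshold,))
    conn.commit()
```

The appeal of this shape is that promotion into permanent memory is a single INSERT ... SELECT, and everything stays queryable with ordinary SQL.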
The Relational Pivot
Relational memory isn't nostalgia; it's practical. Memori aims to give AI agents human-like memory by keeping memories in structured records and leveraging classic SQL querying to recombine context [1].
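"Recombining context" can be as plain as a join. The sketch below reuses the illustrative schema above; recall_about is a hypothetical helper, not a Memori API.

```python
def recall_about(conn, entity_name, limit=5):
    """Fetch long-term memories linked to a named entity, newest first.
    Uses the illustrative schema sketched earlier, not Memori's tables."""
    return conn.execute("""
        SELECT m.content, m.category, m.created_at
        FROM long_term_memory AS m
        JOIN entities AS e ON e.memory_id = m.id
        WHERE e.name = ?
        ORDER BY m.created_at DESC
        LIMIT ?
    """, (entity_name, limit)).fetchall()

# Example: gather stored facts and preferences about "Alice"
# before building the next prompt.
# context = recall_about(conn, "Alice")
```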
Under the hood
A memory search agent shows how retrieval is planned: a SYSTEM_PROMPT guides intent analysis, search parameter extraction, planning, and filtering to surface relevant memories [1].
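The source names a SYSTEM_PROMPT but not its text or the surrounding code, so the sketch below is only an assumed shape for such an agent: an LLM (any prompt-to-string callable) turns the user query into a structured search plan, and plain SQL executes it. SearchPlan, plan_search, and run_search are hypothetical names.

```python
import json
from dataclasses import dataclass, field

# Assumed prompt text; the source only says a SYSTEM_PROMPT guides intent
# analysis, parameter extraction, planning, and filtering.
SYSTEM_PROMPT = """You are a memory search planner.
Given a user query, return JSON with:
  "intent": what the user is trying to recall,
  "entities": names to match against the entities table,
  "categories": memory categories to search ("fact", "rule", "preference").
"""

@dataclass
class SearchPlan:
    intent: str
    entities: list = field(default_factory=list)
    categories: list = field(default_factory=list)

def plan_search(llm, query):
    """Intent analysis + parameter extraction: ask the LLM for a JSON plan."""
    raw = llm(f"{SYSTEM_PROMPT}\nQuery: {query}")
    return SearchPlan(**json.loads(raw))

def run_search(conn, plan, limit=10):
    """Filtering: execute the plan against the illustrative schema,
    surfacing the most recent memories tied to the planned entities."""
    if not plan.entities:
        return []
    placeholders = ",".join("?" * len(plan.entities))
    return conn.execute(f"""
        SELECT DISTINCT m.content, m.category, m.created_at
        FROM long_term_memory AS m
        JOIN entities AS e ON e.memory_id = m.id
        WHERE e.name IN ({placeholders})
        ORDER BY m.created_at DESC
        LIMIT ?
    """, (*plan.entities, limit)).fetchall()
```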
Closing thought
Maybe 50 years of "SELECT * FROM reality" still beating the latest semantic embeddings is the plot twist we needed [1].
References
[1] Everyone's trying vectors and graphs for AI memory. We went back to SQL
Discusses AI memory storage options: SQL vs vector vs graph; argues relational memory can beat embeddings; explores retrieval and scalability.