Vector databases power semantic search [1], and a growing pattern favors on-demand loading from them to cut context bloat [2]. The contrast with relational modeling, exemplified by turning movies into tables [3], is not about declaring a winner but about which approach fits the task at hand.
Embedding-driven approaches shine for natural-language queries and retrieval. One-MCP applies this by indexing tool descriptions in a vector store and loading only the relevant tools on demand, keeping the LLM's context lean [2].
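A minimal sketch of that pattern, under stated assumptions: the `embed` helper below is a hypothetical stand-in for a real text encoder (its toy hash seeding only keeps the sketch self-contained), and the tool names are invented; One-MCP's actual implementation may differ.

```python
import hashlib

import numpy as np


def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real text encoder. A real model
    maps similar meanings to nearby vectors; this hash-seeded toy
    version exists only so the sketch runs without dependencies."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).standard_normal(64)
    return v / np.linalg.norm(v)


# Index each tool's description once; the full tool list
# never has to enter the LLM's context.
TOOLS = {
    "get_weather": "Fetch the current weather for a city.",
    "search_movies": "Find movies by title, actor, or year.",
    "send_email": "Compose and send an email message.",
}
INDEX = {name: embed(desc) for name, desc in TOOLS.items()}


def load_tools(query: str, k: int = 2) -> list[str]:
    """Return the k tools whose descriptions are most similar to
    the query; only these get loaded into context on demand."""
    q = embed(query)
    return sorted(INDEX, key=lambda name: -float(q @ INDEX[name]))[:k]


print(load_tools("what's the forecast in Oslo?"))
```

With a real encoder, the cosine ranking surfaces semantically relevant tools; the token cost of the prompt then scales with `k` rather than with the total number of registered tools.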
Relational modeling keeps things deterministic. The DuckDB piece shows how turning movies into tables enables structured queries and joins that stay predictable at scale [3].
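To make that concrete, here is a hedged sketch using DuckDB's Python API; the three-table schema is a hypothetical example in the spirit of the article, not its actual schema.

```python
import duckdb  # pip install duckdb

con = duckdb.connect()  # in-memory database

# Hypothetical schema echoing the movies-into-tables idea.
con.execute("CREATE TABLE movies (movie_id INTEGER, title TEXT, year INTEGER)")
con.execute("CREATE TABLE people (person_id INTEGER, name TEXT)")
con.execute("CREATE TABLE roles (movie_id INTEGER, person_id INTEGER, role TEXT)")

con.execute("INSERT INTO movies VALUES (1, 'Heat', 1995)")
con.execute("INSERT INTO people VALUES (1, 'Al Pacino'), (2, 'Robert De Niro')")
con.execute("INSERT INTO roles VALUES (1, 1, 'actor'), (1, 2, 'actor')")

# A deterministic, exact join: no embeddings, no similarity ranking.
rows = con.execute("""
    SELECT m.title, m.year, p.name
    FROM movies m
    JOIN roles r  ON r.movie_id = m.movie_id
    JOIN people p ON p.person_id = r.person_id
    WHERE r.role = 'actor'
    ORDER BY p.name
""").fetchall()
print(rows)  # [('Heat', 1995, 'Al Pacino'), ('Heat', 1995, 'Robert De Niro')]
```

The same query returns the same rows every time, which is exactly the predictability the relational approach trades embedding flexibility for.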
• Query types: semantic retrieval and natural-language prompts benefit from embeddings; exact lookups and joins belong in tabular schemas [1][3].
• Costs: embedding vectors and vector stores add storage and compute; on-demand loading can cut token costs by avoiding upfront context [2].
• Tooling implications: MCP-style ecosystems gain from on-demand loading via semantic indices, reducing context bloat in practice [2].
Bottom line: pick your architecture by what you need to query and how much you are willing to pay for context. These patterns are still evolving as tooling tightens the link between data models and access patterns. In practice, teams may combine approaches, using embeddings for discovery and relational tables for exact reporting; a sketch of that hybrid follows.
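A minimal sketch of that hybrid, assuming the same hypothetical `embed` stand-in as above: an embedding routes a free-form question to a table, then plain SQL produces the exact answer. The catalog and queries are invented for illustration.

```python
import hashlib

import duckdb  # pip install duckdb
import numpy as np


def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real text encoder (see earlier sketch).
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    v = np.random.default_rng(seed).standard_normal(64)
    return v / np.linalg.norm(v)


con = duckdb.connect()
con.execute("CREATE TABLE movies (title TEXT, year INTEGER)")
con.execute("INSERT INTO movies VALUES ('Heat', 1995), ('Ronin', 1998)")
con.execute("CREATE TABLE emails (subject TEXT, sent_at DATE)")

# Discovery: embed a short description of each table and route the
# free-form question to the nearest one. (The toy embedder routes
# arbitrarily; a real encoder would match on meaning.)
catalog = {
    "movies": "film titles and release years",
    "emails": "sent and received email messages",
}
vectors = {t: embed(d) for t, d in catalog.items()}
q = embed("how many films are from the 1990s?")
table = max(vectors, key=lambda t: float(q @ vectors[t]))

# Exact reporting: a deterministic SQL aggregate per candidate table.
queries = {
    "movies": "SELECT count(*) FROM movies WHERE year BETWEEN 1990 AND 1999",
    "emails": "SELECT count(*) FROM emails",
}
print(table, con.execute(queries[table]).fetchone()[0])
```

The split keeps each layer doing what it is good at: the fuzzy step only chooses where to look, while the answer itself comes from a deterministic query.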
References
[1] What are vector databases, and why do we need them? Explores vector database technology, its benefits, and the rationale for using it in modern data tasks.
[2] Show HN: One-MCP: Unlimited tools MCP server without context bloat. Open-source MCP server that enables semantic tool search and on-demand loading from a vector store to reduce context size and costs.
[3] Relational Charades: Turning Movies into Tables. Demonstrates turning movies into relational tables, illustrating modeling techniques and SQL queries.