AI agents built from your own data are moving from demos to production-ready tools. A Bot The Builder-style setup uses a FAISS-backed vector store to turn PDFs, docs, and URLs into a deployable agent, accessible via an API. Embeddings come from OpenAI or local models, and a small FastAPI backend stitches the pieces together, so you can drop agents into dashboards or apps with minimal fuss. [1]
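The ingest path might look something like the following, a minimal sketch under stated assumptions: OpenAI's Python SDK for embeddings and FAISS for the index. The model name, chunking, and sample content are illustrative; the post does not specify them.

```python
# A minimal sketch of the ingest step; model name and chunks are assumptions.
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of text chunks; returns a float32 matrix for FAISS."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Chunks extracted from uploaded PDFs, docs, or URLs (placeholder content).
chunks = ["First chunk of an uploaded PDF.", "Second chunk, from a URL."]
vectors = embed(chunks)

index = faiss.IndexFlatL2(vectors.shape[1])  # exact L2 search, no training step
index.add(vectors)

# Answering a query means embedding it and pulling the nearest chunks.
_, ids = index.search(embed(["What does the doc say about pricing?"]), 2)
print([chunks[i] for i in ids[0]])
```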
Upload content, generate embeddings, and spin up a memory-enabled agent. The stack is modular and provider-agnostic—from the vector DB to the embeddings and the UI—making it easy to reuse for client work or internal knowledge bases. [1]
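Serving that index behind an API could then be a thin wrapper, sketched here assuming FastAPI; the route shape and the in-memory per-agent store are invented for illustration, not taken from the post.

```python
# A hedged sketch of the API layer; route and store shape are assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

# Stand-in for per-agent FAISS indexes and chunk stores built at upload time.
AGENT_CHUNKS: dict[str, list[str]] = {"demo": ["chunk one", "chunk two"]}

class Query(BaseModel):
    question: str

@app.post("/agents/{agent_id}/query")
def query_agent(agent_id: str, q: Query) -> dict:
    if agent_id not in AGENT_CHUNKS:
        raise HTTPException(status_code=404, detail="unknown agent")
    # A real handler would embed q.question, search that agent's FAISS
    # index, and pass the top chunks to the LLM as context.
    return {"agent": agent_id, "context": AGENT_CHUNKS[agent_id]}
```

Run it with `uvicorn app:app` and POST `{"question": "..."}` to `/agents/demo/query`; swapping the embedding provider or vector store only touches the ingest side.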
On the Claude side, developers are exploring automatic data modeling. In one setup described in the discussion, Claude runs scripts and uses Playwright MCP to scrape sources, then designs data models and creates matching target stores such as Postgres tables. The scraped data can then be dumped straight into the stores it created, closing the loop from raw data to queryable tables. [3]
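The model-then-store loop might reduce to something like this sketch, assuming psycopg2 for Postgres; the table, columns, connection string, and sample records are invented for illustration.

```python
# A sketch of the model-then-store loop; schema and records are assumptions.
import psycopg2

# Records a Claude-driven Playwright scrape might have produced.
records = [
    {"title": "Widget A", "price": 19.99},
    {"title": "Widget B", "price": 24.50},
]

conn = psycopg2.connect("dbname=scrapes user=postgres")
with conn, conn.cursor() as cur:
    # DDL of the kind the agent emits after inspecting the records above.
    cur.execute(
        """
        CREATE TABLE IF NOT EXISTS products (
            id SERIAL PRIMARY KEY,
            title TEXT NOT NULL,
            price NUMERIC(10, 2)
        )
        """
    )
    # Dump the scraped rows into the store it just created.
    cur.executemany(
        "INSERT INTO products (title, price) VALUES (%(title)s, %(price)s)",
        records,
    )
```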
These moves push vector indexing into database workflows, blurring the line between SQL and embedding search. Natural-language prompts can steer retrieval over Postgres tables alongside embedding results, and the whole pipeline can be API-driven. [1] [3]
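One way to blend the two sides is sketched below, reusing the embed() helper and FAISS index from the ingest snippet; the keyword filter and naive merge are invented, not from either post.

```python
# A hedged sketch of hybrid retrieval, reusing embed() and the FAISS index
# from the ingest sketch above; the SQL filter and merge are assumptions.
def hybrid_search(question: str, cur, index, chunks: list[str], k: int = 3) -> list[str]:
    # Structured side: rows whose title matches the question text.
    cur.execute("SELECT title FROM products WHERE title ILIKE %s", (f"%{question}%",))
    sql_hits = [row[0] for row in cur.fetchall()]
    # Unstructured side: nearest chunks by embedding distance.
    _, ids = index.search(embed([question]), k)
    vector_hits = [chunks[i] for i in ids[0]]
    # Naive merge; a production system would rank and deduplicate.
    return sql_hits + vector_hits
```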
Watch this space—vector-aware databases could redefine how you query data in 2025 and beyond.
References
[1] Show HN: Build and deploy AI agents from your own data in under 60 seconds. Builds AI agents from data using FAISS embeddings; deployable via API; use cases: internal knowledge, client sites.
[3] Ask HN: Claude Skills and Bs4 for intelligent scraping. Author tests Claude-assisted scraping to auto-model data, generate Postgres tables, and store results, seeking market feedback.