A provocative post argues that pure LLMs aren’t enough to reach AGI, claiming that even Rich Sutton has gotten off the bus. It frames Gary Marcus as a steadfast voice challenging language-only AI in a cross-discipline debate [1].
The Hot Take - Pure LLMs can dazzle at language tasks, but critics insist they lack real-world grounding. The piece frames this as the hinge point: blend world-model reasoning with language to push toward robust intelligence [1]. The discussion isn't about a single model; it's about an architecture that uses world knowledge to guide reasoning [1].
World Models vs Language - World-model advocates argue that navigating the real world takes more than syntax: models would need to simulate consequences and plan actions, not just string together plausible phrases [1].
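To make the distinction concrete, here is a toy sketch (not from the post; all names and the 1-D world are invented for illustration) of what "simulating consequences" means: the agent queries a learned or hand-coded transition function before acting, rather than emitting the action that merely sounds plausible.

```python
# Toy illustration of world-model planning, under invented assumptions:
# the "world" is a 1-D position, actions are steps of -1 or +1, and the
# world model is a transition function the agent can query offline.

def world_model(state: int, action: int) -> int:
    """Hypothetical transition model: predicts the next state."""
    return state + action

def plan(state: int, goal: int, depth: int = 5) -> list[int]:
    """Greedy lookahead: simulate each candidate action with the world
    model and keep the one whose predicted state is closest to the goal."""
    actions = []
    for _ in range(depth):
        if state == goal:
            break
        # Simulate consequences of each action before committing to one.
        best = min((-1, +1), key=lambda a: abs(world_model(state, a) - goal))
        actions.append(best)
        state = world_model(state, best)
    return actions

print(plan(0, 3))  # three +1 steps reach the goal: [1, 1, 1]
```

A language-only system, by contrast, has no such transition function to consult; the debate in the thread is whether grafting one on is what robust intelligence requires.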
Implications for AGI Trajectories & Product Design - Blending world models with language could steer future AI toward grounded decision-making, better safety, and cross-domain usefulness, according to the thread [1]. For product design, teams might rethink how AI assistants handle ambiguity, safety, and cross-domain grounding [1].
Closing thought - The thread keeps returning to a simple question: how do we build AI that understands the world, not just language? [1]
References
[1] "Game over for pure LLMs. Even Rich Sutton has gotten off the bus": debate over LLM limits, AGI paths, and blending world models with reasoning beyond pure language models (Marcus, Sutton, LeCun).