Richard Sutton says LLMs are a dead end because they don't learn through real-world experience.
His take, as summarized in this thread, is blunt: LLMs "don't learn through experience what the world is like, but instead get trained to emulate what humans say the world is like," he argues [1]. The training signal, in other words, is human text about the world rather than feedback from the world itself.
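To make that distinction concrete, here is a minimal sketch in Python (everything below is illustrative and invented, not from the video): an imitation-style model that can only echo a human-written corpus, next to a tiny reinforcement learner that discovers the values of a two-armed bandit by acting and observing reward.

```python
import random

# --- Imitation (LLM-style, toy version): fit to what humans *said* ---
# The "model" is a table of next-word counts learned from a human-written corpus.
human_corpus = [("the sky is", "blue"), ("fire is", "hot"), ("the sky is", "blue")]
imitation_model = {}
for context, next_word in human_corpus:
    counts_for_context = imitation_model.setdefault(context, {})
    counts_for_context[next_word] = counts_for_context.get(next_word, 0) + 1
# It can only reproduce claims that appear in its training text.

# --- Experience (RL-style): learn from environment feedback ---
# A two-armed bandit: the agent discovers arm values by acting, not by being told.
true_arm_values = [0.3, 0.7]   # hidden from the agent
q = [0.0, 0.0]                 # agent's running value estimates
pulls = [0, 0]

for step in range(1000):
    # epsilon-greedy: mostly exploit the best estimate, sometimes explore
    arm = random.randrange(2) if random.random() < 0.1 else q.index(max(q))
    reward = 1.0 if random.random() < true_arm_values[arm] else 0.0  # the world's answer
    pulls[arm] += 1
    q[arm] += (reward - q[arm]) / pulls[arm]  # incremental mean update

print("imitation model:", imitation_model)  # knows only what the corpus said
print("learned arm values:", q)             # approaches [0.3, 0.7] from experience
```

The contrast is the point: the imitation table never learns anything absent from its corpus, while the bandit estimates converge toward the hidden values purely through interaction, which is the kind of learning Sutton argues LLMs lack.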
The wider debate about world learning
- Yann LeCun – the post nods to his similar critique, framing the lack of experiential world learning as a core limitation of current LLMs [1].
- Sam Altman and Ilya Sutskever – two of the most visible OpenAI voices, questioned on whether scaling alone will reach AGI or whether the current architecture falls short [1].
Why this matters for scaling and AGI
The thread asks a persistent question: is the present architecture sufficient, or does progress hinge on world interaction and embodied learning beyond text? The answer, as discussed, will shape how researchers weigh scaling against alternatives in the RL-and-AGI discourse [1].
Closing thought
If you want to see where the stalemate shifts, watch how proponents of embodied learning square off against the scaling-focused camp in the coming months.
References
[1] Richard Sutton – "Father of RL thinks LLMs are a dead end" [video]. Sutton argues LLMs lack experiential world learning, contrasts them with RL, questions whether scaling alone can legitimately reach AGI, and references LeCun, Altman, and Sutskever.