Are LLMs just prediction machines, or do they understand things in a meaningful way? Post 1 argues the 'prediction machine' label misses the broader picture, citing Mark Russinovich's video and a painter analogy [1]. Post 3 flags the fight over exposing reasoning content, and Post 2 adds a different voice to the mix [3][2].
Prediction vs. Practice
Post 1 says calling an LLM a prediction machine is like calling a master painter a brushstroke predictor [1]. It frames LLMs as next-token predictors that can nonetheless learn to run calculations, and it even notes recall in latent space [1]. It also warns that math stays tricky for LLMs unless you switch to a 'thinking mode' for math tasks [1]. Mark Russinovich's video helps this point land [1].
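To make the 'next-token predictor' framing concrete, here is a minimal toy sketch: a hand-written bigram table stands in for a model's learned weights, and greedy decoding repeatedly picks the most probable next token. Everything here (the table, the token names) is an illustrative assumption, not a real LLM.

```python
import math

# Toy stand-in for learned weights: raw scores for "which token follows which".
BIGRAM_LOGITS = {
    "the": {"cat": 2.0, "dog": 1.0, "end": 0.1},
    "cat": {"sat": 2.5, "ran": 1.0, "end": 0.5},
    "sat": {"end": 3.0},
    "dog": {"ran": 2.0, "end": 0.5},
    "ran": {"end": 3.0},
}

def softmax(logits):
    """Turn raw scores into a probability distribution over next tokens."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    total = sum(exps.values())
    return {t: v / total for t, v in exps.items()}

def generate(prompt_token, max_tokens=10):
    """Greedy decoding: repeatedly append the most probable next token."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        probs = softmax(BIGRAM_LOGITS[tokens[-1]])
        next_token = max(probs, key=probs.get)
        if next_token == "end":
            break
        tokens.append(next_token)
    return tokens

print(generate("the"))  # → ['the', 'cat', 'sat']
```

A real LLM replaces the lookup table with a neural network conditioned on the whole context, which is exactly where the 'brushstroke predictor vs. painter' debate starts.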
RL vs. LLMs?
Richard Sutton, hailed as the father of reinforcement learning, is skeptical that LLMs are the endgame [2]. He argues sensation-action-reward works in animals but doesn't neatly map to language models; pursuing rewards in the real world can stall learning, whereas digital RL can roam more freely but may miss real-world constraints [2].
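The sensation-action-reward loop Sutton advocates can be sketched with a toy example. The environment (a 1-D "walk to the goal" world) and the epsilon-greedy Q-learning agent below are illustrative assumptions, chosen only to show the loop's shape: observe state, pick action, receive reward, update.

```python
import random

GOAL, ACTIONS = 4, (-1, +1)  # states 0..4; move left or right

def step(state, action):
    """Environment: return (next_state, reward). Reward only at the goal."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, 1.0 if nxt == GOAL else 0.0

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            # Sensation -> action: mostly greedy, occasionally explore.
            if rng.random() < eps:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            # Reward -> learning: standard Q-learning update.
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# After training, moving toward the goal should score higher than moving away.
print(q[(3, +1)], q[(3, -1)])
```

Sutton's critique is that an LLM's training objective (predict the next token from a fixed corpus) has no such environment loop: there is no action taken in a world and no reward signal fed back, which is why he doubts the two paradigms map onto each other.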
Exposing chain-of-thought?
Post 3 reports that ChatGPT won't let you build an LLM server that passes reasoning content through to clients; the discussion covers CoT filters and whether open-source models on local servers face the same constraint [3]. That stance, echoed by OpenAI, highlights the tension between surfacing insights and guarding internal reasoning [3].
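The pass-through question boils down to whether a proxy server forwards a model's reasoning field to clients or strips it first. The sketch below is purely hypothetical: the message shape and the "reasoning"/"content" field names are illustrative assumptions, not any real API's schema.

```python
# Hypothetical proxy-side filter: drop reasoning before forwarding a message.
# Field names ("role", "reasoning", "content") are illustrative assumptions.
def strip_reasoning(message: dict) -> dict:
    """Return a copy of the message with the reasoning field removed."""
    return {k: v for k, v in message.items() if k != "reasoning"}

upstream = {
    "role": "assistant",
    "reasoning": "Let me check: 17 * 3 = 51, so the answer is 51.",
    "content": "17 * 3 = 51",
}
print(strip_reasoning(upstream))  # reasoning dropped; content preserved
```

The debate in Post 3 is about whether a server is allowed to skip this filtering step and expose the reasoning field as-is.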
Bottom line: framing choices shape expectations for accuracy and reliability, from math to problem-solving, and this debate isn't going away any time soon.
References
[1] Calling an LLM a prediction machine is like calling a master painter a brushstroke predictor. Debates whether LLMs predict tokens or understand; compares human-like thinking, calculation, and latent space; mentions Claude, math tasks, the training objective, and accuracy.
[2] Richard Sutton – Father of Reinforcement Learning thinks LLMs are a dead end. Sutton argues LLMs are a dead end and advocates sensation-action-reward RL; the author critiques RL in real-world settings, arguing rewards alone are not sufficient.
[3] ChatGPT won't let you build an LLM server that passes through reasoning content. Discusses CoT filters, OpenAI prompts, open-source LLMs, local servers, and opinions on exposing reasoning content.