
This week we discuss The Illusion of Thinking, a new paper from researchers at Apple that challenges today’s evaluation methods and introduces a new benchmark: synthetic puzzles with controllable complexity and consistent logical structure.
Their findings? Large Reasoning Models (LRMs) show surprising failure modes, including a complete accuracy collapse beyond a certain complexity threshold and a counterintuitive decline in reasoning effort as problems get harder.
Dylan and Parth dive into the paper's findings as well as the debate around it, including a response paper aptly titled "The Illusion of the Illusion of Thinking."
Read the paper: The Illusion of Thinking
Read the response: The Illusion of the Illusion of Thinking
Explore more AI research and sign up for future readings
Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
By Arize AI
