AI reasoning models have emerged over the past year as a beacon of hope for large language models (LLMs), with AI developers such as OpenAI, Google, and Anthropic selling them as the go-to solution for the most complex business problems.
However, a new research paper by Apple has cast significant doubt on the efficacy of reasoning models, going so far as to suggest that when a problem is too complex, they simply give up. What's going on here? And does it mean reasoning models are fundamentally flawed?
In this episode, Rory Bathgate speaks to ITPro's news and analysis editor Ross Kelly about the report's key findings and what they mean for the future of AI development.
By ITPro