Data Science Bulletin

DSB Podcast #18 [CZ] - Reasoning Failures in LLM



In this episode we discuss the research paper 'Large Language Model Reasoning Failures,' published in Transactions on Machine Learning Research. We focus on the paper's categorization framework for identifying situations in which large language models fail at reasoning, noting that its goal is not to evaluate whether these models actually think, but to understand their limitations relative to human reasoning. The episode also highlights that these failures can resemble human cognitive biases and discusses possible approaches to overcoming them.


Blog post with detailed description.


By Data Science Bulletin