AI Odyssey

AI's Guessing Game



Ever wondered why AI chatbots sometimes state things with complete confidence, only for you to find out they're flat-out wrong? This phenomenon, known as "hallucination," is a major roadblock to trusting AI. A recent paper from OpenAI explores why this happens, and the answer is surprisingly simple: we're training them to be good test-takers rather than honest partners.
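To make the test-taker analogy concrete, here is a small Python sketch (not from the episode or the paper; the function and numbers are illustrative) of the incentive the paper describes: a grader that gives full credit for a correct answer and zero for "I don't know" makes guessing the better strategy whenever there is any chance of being right.

    def expected_score(p_correct: float, guess: bool, penalty: float = 0.0) -> float:
        """Expected score under a grader that awards 1 for a correct answer,
        -penalty for a wrong one, and 0 for abstaining ("I don't know")."""
        if not guess:
            return 0.0  # abstaining always scores zero
        return p_correct * 1.0 + (1.0 - p_correct) * (-penalty)

    p = 0.3  # suppose the model is only 30% confident in its answer

    # Accuracy-only grading (no penalty for wrong answers): guessing beats abstaining,
    # so a model optimized for the benchmark learns to always answer.
    print(expected_score(p, guess=True))               # 0.3
    print(expected_score(p, guess=False))              # 0.0

    # If confident wrong answers are penalized, abstaining can win instead,
    # which is the kind of scoring change the paper argues evaluations need.
    print(expected_score(p, guess=True, penalty=1.0))  # -0.4

In other words, under today's benchmarks an uncertain model maximizes its score by guessing, which is exactly the behavior we experience as hallucination.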


This description is based on the paper "Why Language Models Hallucinate" by Adam Tauman Kalai, Ofir Nachum, Santosh S. Vempala, and Edwin Zhang. The episode content was generated using Google's NotebookLM.


Link to the original paper: https://openai.com/research/why-language-models-hallucinate



AI Odyssey, by Anlie Arnaudy, Daniel Herbera, and Guillaume Fournier