Next in AI: Your Daily News Podcast

OpenAI: Why LLMs Hallucinate and How Our Tests Make It Worse

Why do AI chatbots confidently make up facts?

This podcast explores the surprising reasons language models 'hallucinate'. We'll uncover how these plausible falsehoods originate from statistical errors during pretraining and persist because evaluations reward guessing over acknowledging uncertainty. Learn why models are optimized to be good test-takers, much like students guessing on an exam, and what it takes to build more trustworthy AI systems.
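To make the "good test-taker" incentive concrete, here is a minimal sketch (not from the episode itself, and with purely illustrative numbers) of why accuracy-style grading rewards guessing: an answer scores 1 if right and 0 if wrong, while saying "I don't know" also scores 0, so guessing never hurts the expected score. A hypothetical `wrong_penalty` parameter shows how penalizing confident errors can flip that incentive.

```python
# Sketch: expected score per question under different grading schemes.
# All numbers and the wrong_penalty knob are illustrative assumptions.

def expected_score(p_correct: float, guess: bool, wrong_penalty: float = 0.0) -> float:
    """Expected score for one question.

    p_correct     -- the model's chance of being right if it answers
    guess         -- True to answer, False to abstain ("I don't know")
    wrong_penalty -- points deducted for a wrong answer (0 under plain accuracy)
    """
    if not guess:
        return 0.0  # abstaining earns nothing under accuracy-style grading
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty


p = 0.3  # suppose the model is only 30% sure of the answer

# Plain accuracy: guessing (0.3) beats abstaining (0.0), so confident guesses win.
print(expected_score(p, guess=True))                      # 0.3
print(expected_score(p, guess=False))                     # 0.0

# Grading that penalizes confident errors makes abstention the better choice here.
print(expected_score(p, guess=True, wrong_penalty=1.0))   # -0.4
```

Under plain accuracy the guessing strategy strictly dominates, which mirrors the episode's point that benchmarks optimized for accuracy alone push models toward plausible-sounding guesses rather than acknowledged uncertainty.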
