


Why do AI chatbots confidently make up facts?
This podcast explores the surprising reasons language models 'hallucinate'. We'll uncover how these plausible falsehoods originate from statistical errors during pretraining and persist because evaluations reward guessing over acknowledging uncertainty. Learn why models are optimized to be good test-takers, much like students guessing on an exam, and what it takes to build more trustworthy AI systems.
By Next in AI
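A minimal sketch of the incentive the episode describes, assuming a benchmark graded purely on accuracy. The function, the 10% guess, and the idk_credit parameter are illustrative assumptions, not details from the podcast: they show why, under accuracy-only grading, a model always scores higher by guessing than by admitting uncertainty.

```python
# Toy illustration: expected score for one question under two grading schemes,
# comparing a model that guesses with one that answers "I don't know".

def expected_score(p_correct: float, abstain: bool, idk_credit: float = 0.0) -> float:
    """Expected score for a single benchmark question.

    p_correct:  probability the model's guess is right.
    abstain:    whether the model says "I don't know" instead of guessing.
    idk_credit: score awarded for abstaining (0.0 under accuracy-only grading).
    """
    if abstain:
        return idk_credit
    # Binary accuracy grading: 1 point if right, 0 if wrong,
    # so the expected score of a guess is just p_correct.
    return p_correct

# Under accuracy-only grading, even a wild 10% guess beats abstaining:
print(expected_score(0.10, abstain=False))                   # 0.1
print(expected_score(0.10, abstain=True))                    # 0.0
# Give partial credit for acknowledging uncertainty, and abstaining can win:
print(expected_score(0.10, abstain=True, idk_credit=0.25))   # 0.25
```

Because the guessing strategy dominates whenever idk_credit is zero, a model tuned to maximize benchmark scores learns to answer confidently even when it is likely wrong, which is the test-taker behavior the episode examines.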