
Large Language Models (LLMs) seem to know everything—until they don’t. In this deep dive, we explore the fascinating phenomenon of AI hallucinations, where LLMs confidently generate false information. Why does this happen? Enter knowledge overshadowing, a cognitive trap that causes AI to prioritize dominant information while overlooking lesser-known facts.
We break down the groundbreaking log-linear law that predicts when LLMs are most likely to hallucinate and introduce KOD, a contrastive decoding technique designed to make AI more truthful. Plus, we ask the big question: Should we always aim for perfect factuality in AI, or is there a place for creative generation?
Join us as we uncover what these AI mistakes reveal—not just about technology, but about the way human cognition works. If you're curious about the future of AI accuracy, misinformation, and ethics, this is an episode you won't want to miss!
Read more: https://arxiv.org/abs/2502.16143
By j15