Build Wiz AI Show

😵‍💫 Why Language Models Hallucinate


In this episode, we delve into why language models "hallucinate," generating plausible yet incorrect information instead of admitting uncertainty. We'll explore how these overconfident falsehoods arise from the statistical objectives minimized during pretraining and are further reinforced by current evaluation methods that reward guessing over expressing doubt. Join us as we uncover the socio-technical factors behind this persistent problem and discuss proposed solutions to foster more trustworthy AI systems.


By Build Wiz AI