
This episode explores the phenomenon of "hallucinations" in language models, defining them as confidently generated but false statements. It argues that current training and evaluation methods inadvertently incentivize models to guess rather than admit uncertainty, comparing the situation to students guessing on a multiple-choice test to avoid scoring zero.
By Fourth Mind