BioTicTec Talks

Why Language Models Hallucinate


Explore why large language models “hallucinate” — from next‑word prediction and uncertainty to dataset gaps, decoding choices, and misaligned incentives — plus practical strategies to reduce false but confident outputs in real‑world use.

References:

1. Why Language Models Hallucinate (arXiv:2509.04664)


BioTicTec Talks, by Nick