Deep Dive - Frontier AI with Dr. Jerry A. Smith

Why AI Hallucinates: The Math OpenAI Got Right and the Politics They Ignored



Medium: https://medium.com/@jsmith0475/why-ai-hallucinates-the-math-openai-got-right-and-the-politics-they-ignored-1802138739f5
The article, by Dr. Jerry A. Smith, explores the multifaceted nature of AI hallucinations, arguing that they are not merely technical glitches but socio-technical constructs. It highlights two key perspectives. First, Kalai et al. (2025) show statistically why hallucinations are an inevitable consequence of how models are trained and evaluated, and advocate rewarding models for abstaining when uncertain. Second, Smith (2025) introduces a Kantian framework, positing that what counts as a "hallucination" is inherently subjective, shaped by human evaluative choices, including benchmarks that embed specific cultural and political values. The article ultimately calls for moving beyond a "neutrality myth" in AI evaluation, advocating multi-perspective assessments and the democratization of benchmark governance so that AI systems become more accountable and reflective of diverse human realities.
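
For listeners who want the intuition behind the abstention point, here is a minimal sketch (not from the episode or the paper) of how a benchmark's scoring rule shapes the incentive to guess versus say "I don't know"; the reward and penalty values are hypothetical.

```python
# Toy illustration (hypothetical numbers): expected score of guessing vs. abstaining
# under two scoring rules for a question the model answers correctly with
# probability p_correct.

def expected_score(p_correct: float, reward_right: float,
                   penalty_wrong: float, reward_abstain: float) -> dict:
    """Compare the expected score of guessing against abstaining."""
    guess = p_correct * reward_right - (1 - p_correct) * penalty_wrong
    return {"guess": round(guess, 3), "abstain": reward_abstain}

# Binary benchmark: 1 point if right, 0 if wrong, 0 for abstaining.
# Guessing dominates abstention at any nonzero confidence.
print(expected_score(p_correct=0.3, reward_right=1.0,
                     penalty_wrong=0.0, reward_abstain=0.0))
# -> {'guess': 0.3, 'abstain': 0.0}

# Abstention-aware scoring: wrong answers cost points and "I don't know" earns some.
# A low-confidence model now scores better by abstaining than by guessing.
print(expected_score(p_correct=0.3, reward_right=1.0,
                     penalty_wrong=1.0, reward_abstain=0.25))
# -> {'guess': -0.4, 'abstain': 0.25}
```

Under the first rule a model that always guesses is never penalized, which is the incentive structure the episode describes as driving confident fabrication; the second rule makes abstention the rational choice below a confidence threshold.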
