Inside the Black Box: Cracking AI and Deep Learning

Hallucinations, Interpretability, and the Seahorse Mirage

This episode dives into why advanced language models still generate hallucinations, how interpretability tools help us uncover their hidden workings, and what the seahorse emoji teaches us about model and human reasoning. Arshavir connects groundbreaking research, practical business implications, and the statistical quirks that shape AI's version of 'truth.'

By Jellypod