

"The path to superintelligence - just train up the LLMs, train on more synthetic data, hire thousands of people to school your system in post-training, invent new tweaks on RL-I think is complete bullshit. It's just never going to work."
After 12 years at Meta, Turing Award winner Yann LeCun is betting his legacy on a radically different vision of AI. In this conversation, he explains why Silicon Valley's obsession with scaling language models is a dead end, why the hardest problem in AI is reaching dog-level intelligence (not human-level), and why his new company AMI is building world models that predict in abstract representation space rather than generating pixels.
Timestamps:
(00:00:14) – Intro and welcome
(00:01:12) – AMI: Why start a company now?
(00:04:46) – Will AMI do research in the open?
(00:06:44) – World models vs LLMs
(00:09:44) – History of self-supervised learning
(00:16:55) – Siamese networks and contrastive learning
(00:25:14) – JEPA and learning in representation space
(00:30:14) – Abstraction hierarchies in physics and AI
(00:34:01) – World models as abstract simulators
(00:38:14) – Object permanence and learning basic physics
(00:40:35) – Game AI: Why NetHack is still impossible
(00:44:22) – Moravec's Paradox and chess
(00:55:14) – AI safety by construction, not fine-tuning
(01:02:52) – Constrained generation techniques
(01:04:20) – Meta's reorganization and FAIR's future
(01:07:31) – SSI, Physical Intelligence, and Wayve
(01:10:14) – Silicon Valley's "LLM-pilled" monoculture
(01:15:56) – China vs US: The open source paradox
(01:18:14) – Why start a company at 65?
(01:25:14) – The AGI hype cycle has happened 6 times before
(01:33:18) – Family and personal background
(01:36:13) – Career advice: Learn things with a long shelf life
(01:40:14) – Neuroscience and machine learning connections
(01:48:17) – Continual learning: Is catastrophic forgetting solved?
Music:
"Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
"Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
Changes: trimmed
About: The Information Bottleneck is hosted by Ravid Shwartz-Ziv and Allen Roush, featuring in-depth conversations with leading AI researchers about the ideas shaping the future of machine learning.
By Ravid Shwartz-Ziv & Allen Roush
"The path to superintelligence - just train up the LLMs, train on more synthetic data, hire thousands of people to school your system in post-training, invent new tweaks on RL-I think is complete bullshit. It's just never going to work."
After 12 years at Meta, Turing Award winner Yann LeCun is betting his legacy on a radically different vision of AI. In this conversation, he explains why Silicon Valley's obsession with scaling language models is a dead end, why the hardest problem in AI is reaching dog-level intelligence (not human-level), and why his new company AMI is building world models that predict in abstract representation space rather than generating pixels.
Timestamps(00:00:14) – Intro and welcome
(00:01:12) – AMI: Why start a company now?
(00:04:46) – Will AMI do research in the open?
(00:06:44) – World models vs LLMs
(00:09:44) – History of self-supervised learning
(00:16:55) – Siamese networks and contrastive learning
(00:25:14) – JEPA and learning in representation space
(00:30:14) – Abstraction hierarchies in physics and AI
(00:34:01) – World models as abstract simulators
(00:38:14) – Object permanence and learning basic physics
(00:40:35) – Game AI: Why NetHack is still impossible
(00:44:22) – Moravec's Paradox and chess
(00:55:14) – AI safety by construction, not fine-tuning
(01:02:52) – Constrained generation techniques
(01:04:20) – Meta's reorganization and FAIR's future
(01:07:31) – SSI, Physical Intelligence, and Wayve
(01:10:14) – Silicon Valley's "LLM-pilled" monoculture
(01:15:56) – China vs US: The open source paradox
(01:18:14) – Why start a company at 65?
(01:25:14) – The AGI hype cycle has happened 6 times before
(01:33:18) – Family and personal background
(01:36:13) – Career advice: Learn things with a long shelf life
(01:40:14) – Neuroscience and machine learning connections
(01:48:17) – Continual learning: Is catastrophic forgetting solved?
Music:
"Kid Kodi" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
"Palms Down" — Blue Dot Sessions — via Free Music Archive — CC BY-NC 4.0.
Changes: trimmed
AboutThe Information Bottleneck is hosted by Ravid Shwartz-Ziv and Allen Roush, featuring in-depth conversations with leading AI researchers about the ideas shaping the future of machine learning.
