
Yann LeCun is perhaps the most prominent critic of the “LessWrong view” on AI safety, and the only one of the three "godfathers of AI" who does not acknowledge the risks of advanced AI. So, when he recently appeared on the Lex Fridman podcast, I listened with the intent to better understand his position. LeCun came across as articulate and thoughtful[1]. Though I don't agree with it all, I found a lot worth sharing.
Most of this post consists of quotes from the transcript, where I’ve bolded the most salient points. There are also a few notes from me, and a short summary at the end.
Limitations of Autoregressive LLMs
Lex Fridman (00:01:52) You've said that autoregressive LLMs are not the way we’re going to make progress towards superhuman intelligence. These are the large language models like GPT-4, like Llama 2 and 3 soon and so on. [...]
---
Outline:
(00:42) Limitations of Autoregressive LLMs
(05:38) Grounding / Embodiment
(05:41) Richness of Interaction with the Real World
(10:32) Language / Video and Bandwidth
(15:09) Hierarchical Planning
(17:37) Skepticism of Autoregressive LLMs
(32:08) RL(HF)
(36:31) Bias / Open Source
(39:15) Business and Open Source
(41:13) Safety of (Current) LLMs
(43:23) LLaMAs
(46:29) GPUs vs the Human Brain
(49:04) What Does AGI / AMI Look Like?
(51:29) AI Doomers
(57:25) What Does a World with AGI Look Like (Especially Re Safety)?
(01:03:08) Big Companies and AI
(01:05:18) Hope for the Future of Humanity
The original text contained 5 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.