How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity theory to working on AI.
Note: this episode was recorded before this story (vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says) emerged of a man dying by suicide after conversations with a language-model-based chatbot, which included discussion of the possibility of him killing himself.
Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Topics we discuss, and timestamps:
- 0:00:36 - 'Reform' AI alignment
- 0:01:52 - Epistemology of AI risk
- 0:20:08 - Immediate problems and existential risk
- 0:24:35 - Aligning deceitful AI
- 0:30:59 - Stories of AI doom
- 0:34:27 - Language models
- 0:43:08 - Democratic governance of AI
- 0:59:35 - What would change Scott's mind
- 1:14:45 - Watermarking language model outputs
- 1:41:41 - Watermark key secrecy and backdoor insertion
- 1:58:05 - Scott's transition to AI research
- 2:03:48 - Theoretical computer science and AI alignment
- 2:14:03 - AI alignment and formalizing philosophy
- 2:22:04 - How Scott finds AI research
- 2:24:53 - Following Scott's research
The transcript: axrp.net/episode/2023/04/11/episode-20-reform-ai-alignment-scott-aaronson.html
Links to Scott's things:
- Personal website: scottaaronson.com
- Book, Quantum Computing Since Democritus: amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565/
- Blog, Shtetl-Optimized: scottaaronson.blog
Writings we discuss:
- Reform AI Alignment: scottaaronson.blog/?p=6821
- Planting Undetectable Backdoors in Machine Learning Models: arxiv.org/abs/2204.06974