This linkpost contains a lightly edited transcript of highlights from my recent AI x-risk debate with Robin Hanson, and a written version of what I said in the post-debate analysis episode of my Doom Debates podcast.
Introduction
I've pored over my recent 2-hour AI x-risk debate with Robin Hanson to clip the highlights and write up a post-debate analysis, including new arguments I thought of after the debate was over.
I've read everybody's feedback on YouTube and Twitter, and the consensus seems to be that it was a good debate. Many of the topics we brought up were deep cuts into Robin's positions.
On the critical side, people said it came off more like an interview than a debate: I asked Robin a lot of questions about how he sees the world, but I didn't "nail" him. And people were [...]
---
Outline:
(00:26) Introduction
(06:47) Robin's AI Timelines
(09:46) Culture vs. Intelligence
(13:23) Innovation Accumulation vs. Intelligence
(16:13) Optimization = Collecting + Spreading?
(17:28) Worldwide Growth
(20:29) Extrapolating Robust Trends
(22:55) Seeing Optimization-Work
(24:58) Exponential Growth With Respect To...
(27:34) Can Robin's Methodology Notice Foom In Time?
(38:06) Foom Argument As Conjunction
(41:12) Headroom Above Human Intelligence
(46:37) More on Culture vs. Intelligence
(51:42) The Goal-Completeness Dimension
(57:12) AI Doom Scenario
(01:02:36) Corporations as Superintelligences
(01:06:18) Monitoring for Signs of Superintelligence
(01:08:07) Will AI Society's Laws Protect Humans?
(01:11:58) Feasibility of ASI Alignment
(01:14:34) Robin's Warning Shot
(01:16:10) The Cruxes
(01:17:06) How to Present the AI Doom Argument
(01:19:13) About Doom Debates
---