
The second half of my 7-hour conversation with Carl Shulman is out!
My favorite part! And the one that had the biggest impact on my worldview.
Here, Carl lays out how an AI takeover might happen:
* AI can threaten mutually assured destruction from bioweapons,
* use cyber attacks to take over physical infrastructure,
* build mechanical armies,
* spread seed AIs we can never exterminate,
* offer tech and other advantages to collaborating countries, etc.
Plus we talk about a whole bunch of weird and interesting topics that Carl has thought about:
* what is the far future best case scenario for humanity
* what it would look like to have AI make thousands of years of intellectual progress in a month
* how do we detect deception in superhuman models
* does space warfare favor defense or offense
* is a Malthusian state inevitable in the long run
* why markets haven't priced in explosive economic growth
* & much more
Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Catch part 1 here
Timestamps
(0:00:00) - Intro
(0:00:47) - AI takeover via cyber or bio
(0:32:27) - Can we coordinate against AI?
(0:53:49) - Human vs AI colonizers
(1:04:55) - Probability of AI takeover
(1:21:56) - Can we detect deception?
(1:47:25) - Using AI to solve coordination problems
(1:56:01) - Partial alignment
(2:11:41) - AI far future
(2:23:04) - Markets & other evidence
(2:33:26) - Day in the life of Carl Shulman
(2:47:05) - Space warfare, Malthusian long run, & other rapid fire