
Tilek Mamutov is a Kyrgyzstani software engineer who worked at Google X for 11 years before founding Outtalent, an international recruiting company for software engineers.
He first encountered the AI doom argument at a Center for Applied Rationality bootcamp 10 years ago. He considers doom a serious possibility, but he isn’t currently convinced that it’s likely.
Let’s explore Tilek’s worldview and pinpoint where he gets off the doom train and why!
00:12 Tilek’s Background
01:43 Life in Kyrgyzstan
04:32 Tilek’s Non-Doomer Position
07:12 Debating AI Doom Scenarios
13:49 Nuclear Weapons and AI Analogies
39:22 Privacy and Empathy in Human-AI Interaction
39:43 AI's Potential in Understanding Human Emotions
41:14 The Debate on AI's Empathy Capabilities
42:23 Quantum Effects and AI's Predictive Models
45:33 The Complexity of AI Control and Safety
47:10 Optimization Power: AI vs. Human Intelligence
48:39 The Risks of AI Self-Replication and Control
51:52 Historical Analogies and AI Safety Concerns
56:35 The Challenge of Embedding Safety in AI Goals
01:02:42 The Future of AI: Control, Optimization, and Risks
01:15:54 The Fragility of Security Systems
01:16:56 Debating AI Optimization and Catastrophic Risks
01:18:34 The Outcome Pump Thought Experiment
01:19:46 Human Persuasion vs. AI Control
01:21:37 The Crux of Disagreement: Robustness of AI Goals
01:28:57 Slow vs. Fast AI Takeoff Scenarios
01:38:54 The Importance of AI Alignment
01:43:05 Conclusion
Follow Tilek
x.com/tilek
Links
I referenced Paul Christiano’s scenario of gradual AI doom, a slower version that doesn’t require a Yudkowskian “foom”. Worth a read: What Failure Looks Like
I also referenced the concept of “edge instantiation” to explain that if you’re optimizing powerfully for some metric, you don’t get other intuitively nice things as a bonus; you *just* get the exact thing your function is measuring.
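As a minimal sketch of edge instantiation (an illustrative toy example of my own, not something from the episode): imagine a powerful optimizer ranking plans by a metric that only measures speed. It returns a plan with maximal speed, and the comfort and safety you hoped to get as a bonus simply aren’t provided, because nothing in the metric rewards them.

```python
# Hypothetical toy illustration of "edge instantiation" (not from the episode).
# Candidate plans have speed, comfort, and safety scores, but the metric we
# actually wrote down only measures speed. An exhaustive optimizer over that
# metric returns a plan with maximal speed; nothing forces the other scores up.

from itertools import product

# Every possible plan: (speed, comfort, safety), each scored 0-10.
plans = list(product(range(11), repeat=3))

def metric(plan):
    """The objective we specified: speed only."""
    speed, comfort, safety = plan
    return speed

best = max(plans, key=metric)
print(best)  # -> (10, 0, 0): maximal speed; the qualities left out of the metric got nothing
```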
By Liron Shapira