
Tilek Mamutov is a Kyrgyzstani software engineer who worked at Google X for 11 years before founding his own international software engineer recruiting company, Outtalent.
Since first encountering the AI doom argument at a Center for Applied Rationality bootcamp 10 years ago, he has considered it a serious possibility, but he isn't currently convinced that doom is likely.
Let’s explore Tilek’s worldview and pinpoint where he gets off the doom train and why!
00:12 Tilek’s Background
01:43 Life in Kyrgyzstan
04:32 Tilek’s Non-Doomer Position
07:12 Debating AI Doom Scenarios
13:49 Nuclear Weapons and AI Analogies
39:22 Privacy and Empathy in Human-AI Interaction
39:43 AI's Potential in Understanding Human Emotions
41:14 The Debate on AI's Empathy Capabilities
42:23 Quantum Effects and AI's Predictive Models
45:33 The Complexity of AI Control and Safety
47:10 Optimization Power: AI vs. Human Intelligence
48:39 The Risks of AI Self-Replication and Control
51:52 Historical Analogies and AI Safety Concerns
56:35 The Challenge of Embedding Safety in AI Goals
01:02:42 The Future of AI: Control, Optimization, and Risks
01:15:54 The Fragility of Security Systems
01:16:56 Debating AI Optimization and Catastrophic Risks
01:18:34 The Outcome Pump Thought Experiment
01:19:46 Human Persuasion vs. AI Control
01:21:37 The Crux of Disagreement: Robustness of AI Goals
01:28:57 Slow vs. Fast AI Takeoff Scenarios
01:38:54 The Importance of AI Alignment
01:43:05 Conclusion
Follow Tilek
x.com/tilek
Links
I referenced Paul Christiano’s scenario of gradual AI doom, a slower version that doesn’t require a Yudkowskian “foom”. Worth a read: What Failure Looks Like
I also referenced the concept of “edge instantiation” to explain that if you’re optimizing powerfully for some metric, you don’t get other intuitively nice things as a bonus, you *just* get the exact thing your function is measuring.