


Max Winga, an AI safety advocate from Control AI, joins The Peter McCormack Show for a sobering look at the existential risk posed by artificial intelligence.
In this episode, Max reveals how the reckless race for superintelligence, a profound lack of enforceable safety measures, and the apathy of global governance are putting humanity on a potential path to extinction. We explore the chilling capabilities of emerging AI, the concept of a runaway "intelligence explosion," why top AI researchers are sounding the alarm, and what happens when we can no longer control our own creations.

CONTACT PETE
› Website - http://petermccormack.com
› Feedback - https://www.petermccormack.com/contact
› Email - [email protected]
› Instagram - /mccormack555
› X/Twitter - https://x.com/petermccormack/

CONNECT WITH MAX WINGA
› X/Twitter - https://x.com/maxwinga
› Website - https://controlai.news/

SPONSORS
› IREN - https://www.iren.com/
› Ledger - https://www.ledger.com/
› Gemini - https://gemini.com/
› Casa - https://casa.io/

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

TIMESTAMPS:
00:00:00 - Introduction
00:03:15 - Experts Quit Over AI Fears
00:05:10 - Super Intelligence
00:06:45 - AI Models Blackmail Engineers
00:08:30 - Crowdsourcing Our Own Extinction
00:14:15 - AI Extinction Risk Scenarios
00:16:30 - How Do AI Models Work?
00:20:45 - How an AI Model Could Escape
00:28:15 - The End of Human Control
00:31:30 - Why Would AI Want Human Extinction?
00:33:30 - Timeline for an Extinction Event
00:36:00 - AI Agents
00:41:00 - From Chess AI to Self-Learning
00:46:15 - Bitcoin as the Ideal Currency for AI
00:50:20 - The Race to Build Super Intelligence
00:54:30 - The Personal Toll of This Work
00:58:30 - Mapping the Path to Human Extinction
01:01:45 - Are AI Leaders Being Irresponsible?
01:05:15 - The Need for a Global Effort
01:06:45 - Controlled Super Intelligence
01:10:20 - Is a Safety Breakthrough Possible?
01:18:00 - Why Aren't Safety Issues Being Solved?
01:20:00 - Should Individuals Stop Using AI?
01:27:00 - What If We Pass the Point of No Return?
01:29:45 - "Stop Hiring Humans" Advertising Campaign
01:32:00 - The Current State of Robotics
01:35:30 - How Robots Learn and Improve
01:37:00 - Final Thoughts and Call to Action

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

LISTEN / SUBSCRIBE TO THE PODCAST
› Apple Podcasts: https://apple.co/40ruY9K
› Spotify: https://spoti.fi/3Wc94Vu
› Fountain: https://bit.ly/FountainPM
› YouTube: https://bit.ly/YouTube_PM
› Rumble: https://bit.ly/RumblePM

FILMED BY CURTIS TAYLOR
https://www.curttaylor.co.uk/
https://x.com/curttayloruk/

EDITED BY CONOR MCCORMACK

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Hosted on Acast. See acast.com/privacy for more information.
By Peter McCormack
4.8 · 2,143 ratings
