

Hundreds of public figures, from Steve Wozniak to Prince Harry, just signed a petition demanding a global ban on AI superintelligence. Their fear? That super-AI could outthink us, escape our control, and maybe even spell the end of humanity.
I get it. The Skynet comparisons. The doomsday bunkers. The "pause everything until it's safe" approach. On the surface, it sounds reasonable.
But here's the hard truth: If we don't build it, someone else will. And you better pray they believe in freedom.
Timestamps:
00:00 - Intro: "Prince Harry wants to ban superintelligent AI?"
02:30 - What the open letter actually says
05:00 - The real fears behind the ban movement
07:15 - Why bans might backfire (China, anyone?)
09:20 - Historical analogies: cars, nukes, and Pandora's box
11:30 - Who benefits from slowing AI down?
13:45 - Regulation vs. prohibition: the real solution
16:00 - The only thing scarier than ASI? Letting someone else build it first.
In this episode, I break down:
Why people are calling for a ban on superintelligent AI
The fears we should absolutely empathize with
Why banning it could actually make the threat worse
How we can build ASI safely, but only if we lead
Why some folks shouting "pause" might just be trying to protect their power
I don't side with blind acceleration. But I don't buy moral panic either. There's a middle path: innovate with oversight, lead with principles, and don't cede the future to authoritarian AI.
This one's unfiltered, unsponsored, and unapologetic. Let's go.
Contact Mark: @markfidelman on X
By Mark Fidelman
