

In this episode of Warning Shots, Jon, Michael, and Liron break down a bizarre AI-era clash: Marc Andreessen vs. the Pope. What started as a calm, ethical reminder from Pope Leo XIV turned into a viral moment when the billionaire VC mocked the post, then deleted his tweet after widespread backlash. Why does one of the most powerful voices in tech treat even a mild call for moral responsibility as an attack?
🔎 This conversation unpacks the deeper pattern:
* A16Z’s aggressive push for acceleration at any cost
* The culture of thin-skinned tech power and political influence
* Why dismissing risk has become a badge of honor in Silicon Valley
* How survivorship bias fuels delusional confidence around frontier AI
* Why this “Pope incident” is a warning shot for the public about who is shaping the future without their consent
We then pivot to a major capabilities update: MIT's new SEAL (Self-Adapting Language Models) framework, a step toward self-modifying AI. The team explains why this could be an early precursor to recursive self-improvement, the red line that makes existential risk real rather than theoretical.
📺 Watch more on The AI Risk Network
🔗 Follow our hosts:
Liron Shapira - Doom Debates
Michael - @lethal-intelligence
By The AI Risk Network