

What if the greatest danger of AI isn’t extinction — but the quiet loss of our ability to think and choose for ourselves? In this episode of For Humanity, John sits down with journalist and author Jacob Ward (CNN, PBS, Al Jazeera; The Loop) to unpack the most under-discussed risk of artificial intelligence: decision erosion.
Jacob explains why AI doesn’t need to become sentient to be dangerous — it only needs to be convenient. Drawing from neuroscience, behavioral psychology, and real-world reporting, he reveals how systems designed to “help” us are slowly pushing humans into cognitive autopilot.
Together, they explore:
* Why AI threatens near-term human agency more than long-term sci-fi extinction
* How Google Maps offers a chilling preview of AI’s effect on the human brain
* The difference between fast thinking and slow thinking — and why AI exploits it
* Why persuasive AI may outperform humans politically and psychologically
* How profit incentives, not intelligence, are driving the most dangerous outcomes
* Why focusing only on extinction risk alienates the public — and weakens AI safety efforts
👉 Follow More of Jacob Ward’s Work:
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
#AISafety #AIAlignment #ForHumanityPodcast #AIRisk #ForHumanity #JacobWard #AIandSociety #ArtificialIntelligence #HumanAgency #TechEthics #AIResponsibility
By The AI Risk Network · 4.4 (88 ratings)