


What if the greatest danger of AI isn’t extinction — but the quiet loss of our ability to think and choose for ourselves? In this episode of For Humanity, John sits down with journalist and author Jacob Ward (CNN, PBS, Al Jazeera; The Loop) to unpack the most under-discussed risk of artificial intelligence: decision erosion.
Jacob explains why AI doesn’t need to become sentient to be dangerous — it only needs to be convenient. Drawing from neuroscience, behavioral psychology, and real-world reporting, he reveals how systems designed to “help” us are slowly pushing humans into cognitive autopilot.
Together, they explore:
* Why AI threatens near-term human agency more than long-term sci-fi extinction
* How Google Maps offers a chilling preview of AI’s effect on the human brain
* The difference between fast-thinking and slow-thinking — and why AI exploits it
* Why persuasive AI may outperform humans politically and psychologically
* How profit incentives, not intelligence, are driving the most dangerous outcomes
* Why focusing only on extinction risk alienates the public — and weakens AI safety efforts
👉 Follow More of Jacob Ward’s Work:
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
#AISafety #AIAlignment #ForHumanityPodcast #AIRisk #ForHumanity #JacobWard #AIandSociety #ArtificialIntelligence #HumanAgency #TechEthics #AIResponsibility
By The AI Risk Network · 4.4 (99 ratings)
