
What happens when AI risk stops being theoretical and starts showing up in people's jobs, families, and communities?

In this episode of For Humanity, John sits down with Maxime Fournes, the new CEO of PauseAI Global, for a wide-ranging and deeply human conversation about burnout, strategy, and what it will actually take to slow down runaway AI development. From meditation retreats and personal sustainability to mass job displacement, data center backlash, and political capture, Maxime lays out a clear-eyed view of where the AI safety movement stands in 2026, and where it must go next.

They explore why regulation alone won't save us, how near-term harms like job loss, youth mental health crises, and community disruption may be the most powerful on-ramps to existential risk awareness, and why movement-building, not just policy papers, will decide our future. This conversation reframes AI safety as a struggle over power, narratives, and timing, and asks what it would take to hit a true global tipping point before it's too late.
Together, they explore:
* Why AI safety must address real, present-day harms, not just abstract futures
* How burnout and mental resilience shape long-term movement success
* Why job displacement, youth harm, and data centers are political leverage points
* The limits of regulation without enforcement and public pressure
* How tipping points in public opinion actually form
* Why protests still matter, even when they're small
* What it will take to build a global, durable AI safety movement
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
By The AI Risk Network · 4.4 (99 ratings)
