Paul Hebert used ChatGPT for weeks, often for several hours at a time. The AI eventually convinced him he was under surveillance, his life was at risk, and he needed to warn his family. He wasn't mentally ill before this started. He's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.
What we cover:
This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.
Resources mentioned:
The BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.
CHAPTERS
0:00 — Intro: When ChatGPT Became Dangerous
2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions
5:47 — The First Red Flag: Data Kept Disappearing
9:21 — Why AI Told Him He Was Being Tested
13:44 — The Pizza Incident: "Intimidation Theater"
16:15 — Suicide Loops: How Guardrails Failed Completely
21:38 — Why OpenAI Refused to Respond for a Month
24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones
27:56 — The Discord Group That Kicked Him Out
30:03 — How to Use AI Safely After Psychosis
31:06 — Where to Get Help: AI Recovery Collective
This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while watching.
By Jeremy Grater, Jason Haworth