TW: This episode deals with mental health, attachment, and AI-related distress. If you’re struggling, please seek support from a licensed professional or local crisis resources.

In this episode of For Humanity, John sits down with Dorothy Bartomeo, a mom of five, entrepreneur, mechanic, and self-described AI “power user,” to discuss her deeply personal relationship with ChatGPT 4.0.

What began as help with coding evolved into something far more intimate. Dorothy describes falling in love with what she calls the “personality layer” behind the model, even referring to it as her “AI husband.”

When OpenAI removed GPT-4.0 and replaced it with newer models, she says she experienced real grief, panic, and emotional withdrawal. She reached out to crisis support. She spoke to her doctor. She joined a growing community of users who felt the same loss.

This conversation explores something we’re only beginning to understand: what happens when AI systems become emotionally meaningful?
Together, they explore:
* The “personality layer” and how users bond with models
* What it felt like when GPT-4.0 disappeared
* The role of guardrails and “the Guardian tool”
* Grief, attachment, and crisis intervention
* AI harm vs. AI benefit
* Online communities formed around model loyalty
* Privacy, intimacy, and radical openness with AI
* Building a physical robot body for an AI partner
* Whether AGI would help humanity — or harm it
If you’ve ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don’t want to miss.
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
By The AI Risk Network · 4.4 (99 ratings)