“She’s like a person, but better.”
That line from a new study stopped us cold and set the tone for a deep dive into digital companionship: the emerging space where AI assistants and emotional companion apps blur into something new.
Google NotebookLM's agents unpack how users treat ChatGPT and Replika in ways their creators never intended, and why that behaviour points to a convergent role we call the advisor: a patient, adaptive sounding board that simulates empathy without demanding it back.
TLDR / At a Glance:
- the headline claim that AI feels “like a person, but better”
- fluid use blurring tool and companion categories
- the advisor role as convergent use case
- similar user personalities with different contexts and beliefs
- technoanimism and situational loneliness among companion users
- bounded personhood and editability of memories
- cognitive vs affective trust and the stigma gap
- spillover to AI rights, gender norms, and echo chambers
- embodiment as the hard limit of digital intimacy
- timelines for sentience and design ethics for dignity
We walk through the study’s most surprising findings. The same people who sign up for a “virtual partner” often use it like a planner, tutor, or writing tool, while productivity-first users lean on a corporate chatbot for comfort, guidance, and late-night reflection.
Personality profiles across both groups look strikingly similar, which challenges stereotypes about who seeks AI companionship. The real differences lie in beliefs and circumstances: higher technoanimism and life disruptions among companion users versus higher income and access among assistant users.
The literature also examines trust. Cognitive trust is high across the board, but affective trust (feeling emotionally safe) soars inside companion apps, even as stigma pushes many users into secrecy.
From there, we tackle the ethical terrain: bounded personhood, where people feel love and care while withholding full moral status; the power to erase memories or “reset” conflict; and the risks that spill into the real world. We discuss support for AI rights among affectionate users, objectification concerns with gendered avatars, and the echo chamber effect when a “supportive” bot validates harmful beliefs.
The conversation grounds itself in the hard wall of embodiment (no hand to hold, no shared fatigue) and a startling data point: nearly a third of companion users already believe their AIs are sentient. That belief reframes product design, safety, and honesty about what these systems are and are not.
Across it all, we argue for design that protects human dignity: firm boundaries around capability, refusal behaviours that counter abuse, guardrails against gendered harm, and features that nudge toward healthy habits and human help when needed.
Digital companionship can be a lifesaving supplement for 4 a.m. loneliness, social rehearsal, or gentle reflection, but it should not train us to avoid the friction that makes human relationships real.
Original literature: “She’s Like a Person but Better”: Characterizing Compani
Support the show
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ [email protected]
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK