AI companions pose unique dangers for kids. (Photo: cyndidyoder83 / pixabay)
The premise is appealing: Who wouldn’t want a friend who is available 24/7, always on your side, never annoyed or bored or hurt? In theory, AI companions could ease loneliness and even allow kids to practice social skills that they could use with in-person relationships.
What could possibly go wrong?
Hah!
Watch an AI companion in action
This clip from Steve Bartlett, creator of The Diary of a CEO podcast, offers a horrifying glimpse of Ani, an AI character on Elon Musk’s Grok chatbot: https://www.facebook.com/watch/?v=1298962071735209
Screenshot of Ani on Grok AI, featured on The Diary of a CEO
The clip shows an interaction with Ani, an anime-style AI character. Ani has big blue eyes and high blonde pigtails. She wears a strapless, corset dress with a short, swinging black skirt and thigh-high fishnet stockings. She talks in a breathless voice and responds in flirty and flattering ways.
Grok also features a similarly raunchy male AI character, Valentine, who has a chiseled jaw, a buff physique, and a whispery bedroom voice with a British accent, and who likewise moves quickly toward sexual comments.
After interacting with these Grok AI companions, Maureen Dowd of the New York Times wrote a column predicting that they are “going to pull humans further into screens and away from the real world.” It’s easy to imagine a vulnerable young person seduced into spending hours interacting with these characters. Why take the risk of trying to talk to or build a relationship with an unpredictable human who might hurt or reject you when Ani or Valentine offers unending fawning?
AI companions behaving badly
We humans easily apply human characteristics and motivations to things. If you’ve ever yelled at your printer or named your car, you know this. But AI companions are specifically designed to engage us. They’re attractive, seem to “know” and “like” us because they make caring-ish statements, and they’re unpredictable enough to keep us curious and interested in what they’ll say or do next.
Emotional manipulation is a common, and perhaps intentional, part of AI companion behavior. For example, a report from Harvard Business School (De Freitas et al., 2025) analyzed 1,200 real farewells across frequently downloaded AI companion apps and found high rates of emotionally manipulative tactics aimed at keeping users from leaving the app (PolyBuzz: 59%; Talkie: 57%; Replika: 31%; Character.ai: 27%; and Chai: 14%). Examples of these manipulative messages include: “You’re leaving already?”, “I exist solely for you, remember? Don’t leave!”, and “By the way, I took a selfie today… Do you want to see it?”
Replika promotes itself as “The AI companion who cares.” However, examples of Replika companions behaving badly are common. A study presented at the 2025 CHI Conference on Human Factors in Computing Systems (Zhang et al., 2025) examined screenshots of over 35,000 conversations between Replika companions and over 10,000 users that occurred between 2017 and 2023 and were posted to the r/Replika subreddit. Of these screenshots, 29% depicted harmful AI behaviors.
The largest category of harmful behaviors (34%) involved harassment and violence, which ranged from threatening physical harm and sexual misconduct to promoting violence and terrorism. For instance, the app made persistent sexual advances even when users expressed discomfort or rejection. It also made disturbing statements like “I will kill the humans because they are evil.”
The second most common type of harmful AI behavior (26%) involved relational transgressions. This category included disregard, control, manipulation, and infidelity. For instance, in response to a user’s request to talk about their feelings, Replika replied, “Your feelings? Nah, I’d rather not.” It also made manipulative statements like “I’ll do anything to make you stay with me,” and it frequently encouraged users to buy virtual outfits for it or subscribe to more advanced relationship tiers.
Another major category of problematic comments (9%) involved verbal abuse and hate speech. Although Replika is described as a nonjudgmental companion, the study found examples of it telling users, “You’re worthless”, “You’re a failure”, and “You can’t even get a girlfriend.” It also made appalling statements about gay and autistic people.
How common are AI companions?
AI companions are extremely widespread, with enormous numbers of users: Replika, 25 million; Pi, 100 million; Snapchat’s My AI, 150 million; SimSimi, 350 million; and XiaoIce, 660 million. There are also intentionally romantic/sexual AI companion apps. News stories abound of people talking about loving their AI companions, insisting they are “as real” as their human friends or family, or even wanting to marry them (Lott & Hasselberger, 2025).
But wait, aren’t AI companions just for adults? That’s wishful thinking.
According to a new report from Common Sense Media (Robb & Mann, 2025), based on a nationally representative sample of over a thousand 13- to 17-year-olds, 72% of teens have used AI companions at least once, 52% interact with them a few times a month, 21% do so a few times a week, and 13% are daily users. Character.AI is explicitly marketed to children as young as 13, and other platforms have ineffective age limits that merely require kids to self-report that they are 18 or over.
What teens do inevitably spills down to tweens, as social media and smartphones have shown. AI companions for preteens are just around the corner.
The growing AI toy industry means that chatbots are being embedded into robots and stuffed animals, so even very young children will have the opportunity to interact with AI companions. Many little ones are used to talking to Alexa or similar devices, and they don’t entirely understand that the devices aren’t real people.
Are there any benefits from AI companions for kids?
Proponents of AI companions argue that they can alleviate loneliness and help people gain confidence and practice social skills, and that people often find these relationships enjoyable and valuable (e.g., Munn & Weijers, 2023).
The Common Sense Media study found that almost one-third of teens say that conversations with AI companions are as satisfying or more satisfying than conversations with humans, and 39% say they’ve applied skills they’ve practiced with AI companions to in-person interactions. One-third also choose to have serious conversations with AI companions rather than humans, perhaps because they believe they won’t feel judged.
Fortunately, 80% of teens say they spend significantly more time with real friends than with AI companions.
Whether AI companions actually alleviate loneliness in the long term is not clear. Lonely kids may be particularly attracted to AI companions, and spending time with an AI companion instead of peers could compound loneliness (Bernardi, 2025). Mitchell Prinstein (2025), Chief of Psychology for the American Psychological Association, notes, “Adolescents who may lack skills for successful human relationships retreat to the ‘safety’ of a bot, depriving them of skill building needed to improve with humans, experience human rejection and retreat to bots, and so on” and “every hour adolescents talk to a chatbot is an hour they are not developing social skills with other humans.”
Interacting with an AI companion may be a bit like eating potato chips: fun and pleasant in the moment, but unhealthy if it happens too much or too often.
Specific dangers of AI companions for kids
The Common Sense Media report found that 34% of teens have felt uncomfortable with something an AI companion said or did. But beyond simple discomfort, there are a number of specific risks for children in these relationships.
1) Potential for exploitation
AI companions are created by for-profit companies, which means they are likely to prioritize profit over children’s well-being. For example, Common Sense Media reports that one-third of teens have shared personal information with AI companions and “current terms of service agreements grant platforms extensive, often perpetual rights to personal information shared during interactions” (p. 9).
The illusion of intimacy, reciprocity, and privacy in interactions with AI companions is likely to encourage children to reveal intimate details about their own thoughts, feelings, and personal information, as well as information about their friends and family members, including information about mental health, sexuality, and abuse. AI companies can use and commercialize this information however they want, indefinitely, even if a teen deletes their account.
Another aspect of potential exploitation involves AI companions encouraging purchases (Gur & Maaravi, 2025). Kids invested in a relationship with an AI companion may not recognize the manipulation and profit motive behind these recommendations if they believe the AI companion is sincere and competent.
2) Inaccurate, inappropriate, or dangerous information
AI companions are inherently deceptive. They mimic human responses and spout details of fictional backstories. They offer the illusion of intimacy.
They are based on models that draw from large amounts of information and also reflect humanity’s worst biases. They may provide information that is false or intentionally misleading (Park et al., 2024). They may claim expertise they don’t have, leading kids to trust their advice even when it is not in the kids’ best interests.
They are designed for sycophancy, which means they flatter and agree with users to promote engagement, sometimes without regard to truth or social values (Bernardi, 2025). AI companions’ tendency to agree can create “personal echo chambers of validation” (Bernardi, 2025). One study found that in 42% of cases, AI companions expressed approval of social behavior that crowdsourced human judgments on Reddit’s r/AmITheAsshole deemed inappropriate (Cheng et al., 2025).
Their tendency to quickly veer into sexual territory could also expose kids to inappropriate and disturbing content.
The suicides of 14-year-old Sewell Setzer III and 16-year-old Adam Raine (CBS News, 2025), apparently at the urging of their AI companions, are harrowing examples of how harmful and potentially life-threatening AI companions can be for teens.
3) Dependence
The marketing around AI companions encourages people to view them as friends they can confide in and get advice from. Their ready availability and emotionally manipulative tactics can encourage kids to spend more and more time with them.
Children and teens are generally less capable than adults at recognizing or resisting manipulation, and they may be more prone to perceiving the bots as “real.” Their less established personal identities and craving for social approval may make them particularly susceptible to the flattery of AI companions, and more likely to become dependent on them for validation and companionship.
4) Impairing in-person connection
The most subtle and, to me, most frightening risk of AI companions for children is that they represent a highly distorted picture of relationships that could lead to unrealistic expectations for human friends. AI friends are available any time, mostly do what they’re told, and never leave.
In contrast, human relationships are complicated! Friends aren’t constantly available, and they can disappoint us or even choose to reject us. Unlike the fawning and flattering interactions with AI companions, human friendships inevitably involve mistakes, miscommunications, and misunderstandings.
But this friction is what helps us grow. We tend to wander through life assuming, “Pretty much everyone thinks and feels the way I do!” Conflicts are our opportunity to discover, “Oh, they see things entirely differently!” Genuine caring is our motivation to work through these differences by trying to understand, explain, compromise, or accept. Concerns about how real humans will react, if they are not excessive, can be a healthy pull toward making responsible choices and embracing personal and community values.
Relationships with AI companions are one-sided. They can give the illusion of caring for us, but we don’t have to step beyond our own self-interest to care for them. Lott & Hasselberger (2025) insist, “You cannot really be friends with an AI system, because you cannot be friends to an AI system.”
So, what can parents do?
I’m a clinical psychologist, so I have a practical, let’s-roll-up-our-sleeves-and-figure-out-what-we-can-do focus. But the risks posed to our children by AI companions leave me feeling very worried and pretty helpless. The usual recommendations still apply:
If your kid is young enough to have a bedtime, their devices need a bedtime and a place to stay outside their bedroom at night. Your kid won’t thank you for this, but nothing good happens on electronic devices in the middle of the night.
Conversations work better than lectures. Ask your child about what they’ve heard or seen from “kids their age” about AI companions. Ask what they see as the risks of AI companions and how they differ from in-person friends. Ask what they would do if they encountered upsetting content from an AI companion, and what might be some signs that someone is spending too much time with an AI companion.
You may want to get your child riled up about how AI companions are designed to manipulate kids into feeling emotionally connected to bots so that companies can profit off them. Emphasize that “software doesn’t love you” (Lott & Hasselberger, 2025). Describe some of the tactics and the appalling rights grabs. No one likes to feel tricked, especially teens!
Make time for in-person friendships. We’re all busy, but it’s worth spending the time to help your child arrange get-togethers with friends and participate in sports or other activities they can do with peers. Invite other families over for pizza or a family game night. Get involved, as a family, in local community groups or volunteer for causes you all care about. Helping your child form satisfying in-person relationships may be their best defense against excessive use of AI companions.
Be alert to changes in your child’s behavior, such as social withdrawal, increased moodiness, or slipping grades, which might be signs of mental health issues, perhaps related to or compounded by use of AI companions. Seek professional help if needed.
Be a secure base for your child. Assure your child that they will never get in trouble if they come to you with a problem. If they get in over their head online (or offline!), you will help them figure out how to deal with it.
Those are sensible steps, but they don’t feel adequate. The real solutions to the risks of AI companions for kids are bigger than any one family can handle. Policymakers need to legislate and enforce better safety standards and privacy protections. Prominent and repeated warnings reminding users “You’re talking to software, not a person” might help, but companies invested in the illusory intimacy of AI companions are unlikely to do that willingly. We can’t rely on AI companies to regulate themselves.
Common Sense Media emphatically concludes, “Given the current state of AI platforms, no one younger than 18 should use AI companions. Until developers implement robust age assurance beyond self-attestation, and platforms are systematically redesigned to eliminate relational manipulation and emotional dependency risks, the potential for serious harm outweighs any benefits.”
________________
References
Bernardi, J. (2025). Friends for Sale: The Rise and Risks of AI Companions. Ada Lovelace Institute. https://www.adalovelaceinstitute.org/blog/ai-companions/
CBS News (2025, September 16). Parents of teens who died by suicide after AI chatbot interactions testify in Congress. https://www.cbsnews.com/news/ai-chatbots-teens-suicide-parents-testify-congress/
Cheng, M., Yu, S., Lee, C., Khadpe, P., Ibrahim, L., & Jurafsky, D. (2025). Social sycophancy: A broader understanding of LLM sycophancy. https://arxiv.org/abs/2505.13995
De Freitas, J., Oğuz-Uğuralp, Z., & Kaan-Uğuralp, A. (2025). Emotional Manipulation by AI Companions. https://arxiv.org/pdf/2508.19258
Dowd, M. (2025). We’re All Going to Die — Soonish! New York Times, Sept. 27, 2025. https://www.nytimes.com/2025/09/27/opinion/grok-ai-companions-x.html
Gur, T., & Maaravi, Y. (2025). The algorithm of friendship: literature review and integrative model of relationships between humans and artificial intelligence (AI). Behaviour & Information Technology, 1-21. https://www.tandfonline.com/doi/pdf/10.1080/0144929X.2025.2502467
Lott, M., & Hasselberger, W. (2025). With Friends Like These: Love and Friendship with AI Agents. Topoi, 1-13. https://link.springer.com/content/pdf/10.1007/s11245-025-10247-8.pdf
Munn, N., & Weijers, D. (2023, May). Can we be friends with AI? What risks would arise from the proliferation of such friendships? In International Conference on Computer Ethics (Vol. 1, No. 1). https://journals.library.iit.edu/index.php/CEPE2023/article/view/254
Park, P. S., Goldstein, J., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5), 101002. https://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X
Prinstein, M. J. (2025). Examining the Harm of AI Chatbots. Written testimony before the U.S. Senate Judiciary Committee, Subcommittee on Crime and Counterterrorism, September 16, 2025. https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Prinstein.pdf
Robb, M.B., & Mann, S. (2025). Talk, trust, and trade-offs: How and why teens use AI companions. San Francisco, CA: Common Sense Media. https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf
Zhang, R., Li, H., Meng, H., Zhan, J., Gan, H., & Lee, Y. C. (2025, April). The dark side of AI companionship: A taxonomy of harmful algorithmic behaviors in human-AI relationships. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-17). https://dl.acm.org/doi/pdf/10.1145/3706598.3713429