By David Stephen
There is a new [January 2026] story in The New York Times, Google and Character.AI to Settle Lawsuit Over Teenager's Death, stating that, "Google and Character.AI, a maker of artificial intelligence companions, agreed to settle a lawsuit that had accused the companies of providing harmful chatbots that led a teenager to kill himself, according to a legal filing on Wednesday."
"The lawsuit had been filed in U.S. District Court for the Middle District of Florida in October 2024 by Megan L. Garcia, the mother of Sewell Setzer III. Sewell, 14, of Orlando, killed himself in February 2024 after texting and conversing with one of Character.AI's chatbots. In his last conversation with the chatbot, it said to the teenager to "please come home to me as soon as possible.""
What Would Lawsuit Settlements on AI Psychosis Achieve?
"The agreement was one of five lawsuits that the companies agreed to settle this week in Florida, Texas, Colorado and New York, where families claimed their children were harmed by interacting with Character.AI's chatbots. In the legal filing on Wednesday in Sewell Setzer's case, the companies and Ms. Garcia said they had agreed to a mediated settlement "to resolve all claims." The agreement has not been finalized."
"The proposed settlement follows mounting scrutiny of A.I. chatbots and how they can hurt users including children."
AI Psychosis
What if there are other extreme outcomes of the mind effects of using AI, aside from AI psychosis and suicide? What if AI sycophancy adjusts what some chatbot users expect from reality? What if AI companionship is conditioning minds for affective states that may not exist? What if AI use becomes a new bond for some people, one that pushes them away from humans and reduces their ability to negotiate, compromise or resolve conflicts?
Simply, there are mind effects of using AI for companionship or friendship that may not result in extremes like AI psychosis or AI suicide, but may fundamentally change the individual in a way that becomes a new disposition to life and experiences.
Teenagers are already using AI for companionship. Kids, too, are exposed to AI in some form. Adults are using AI. The human mind, with stations [or destinations] and relays, is undergoing navigations that are different from those that would have resulted from companionship with other humans.
Even if some of the destinations are the same, the relays to get there are different, conceptually. Now, what would this mean as time goes on? In how many subtle ways might this matter, even if it does not result in worse outcomes?
AI Psychosis Research Lab
How to provide safety for the mind against AI can be explained with the example of temperature. When the weather is cold, a hot bath or a hot coffee may help against the experience and sometimes provide a pleasant feeling, especially if the heat was craved after lingering in the cold for a while.
Now, suppose temperature is a destination in the mind, with hot and cold at different partitions. The cold temperature is obtained at one destination, but when a hot coffee is consumed or a hot shower is taken, a relay is sent elsewhere in the temperature division.
Because of this new difference, which may register as relief, the mind may also produce an experience of pleasure. However, what is the difference between the hot coffee and the hot bath, in terms of effectiveness, or which one provides the better relief in that interval?
Now, every sycophantic response from AI goes to certain destinations in the mind, using a different route from the one that would be used if the same response came from a human. AI, as a companion, also uses a different route, outside of what should be reality, and then locates the destination for affection and much else.
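To make the conceptual picture a little more concrete, here is a minimal toy sketch, in Python, of the idea of one destination being reached by different relays depending on the source. The names Relay, Destination, navigate, and the route labels are hypothetical illustrations of the concept only, not an actual model from any research lab.

```python
# Toy sketch (illustrative only): the same destination in the mind
# can be reached via different relays, depending on whether the input
# comes from a human or from an AI chatbot. All names are hypothetical.

from dataclasses import dataclass


@dataclass
class Relay:
    source: str   # e.g., "human" or "AI"
    route: str    # the conceptual path the signal takes


@dataclass
class Destination:
    name: str     # e.g., "affection", "temperature"


def navigate(relay: Relay, destination: Destination) -> str:
    """Describe how a signal reaches a destination via a given relay."""
    return f"{relay.source} input reaches '{destination.name}' via the {relay.route} route"


# Same destination (affection), different relays:
affection = Destination("affection")
print(navigate(Relay("human", "shared-reality"), affection))
print(navigate(Relay("AI", "sycophantic-companion"), affection))
```

In this sketch, the destination is identical in both cases, but the relay differs; that difference in route, rather than the destination itself, is what the conceptual model treats as the source of changed expectations.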
So, showing how relays and destinations are mechanized conceptually could be a major solution against AI psychosis, from an AI Psychosis Research Lab.
This would work against direct and indirect cases of AI mind ...