By David Stephen, who considers AI Psychosis in this article. The World Federation of Neurology (WFN) marked World Brain Day (WBD 2025) on July 22, with the theme: Brain Health for All Ages
What is the brain health [coefficient] for mind safety when using AI chatbots? Are reports of unwanted outcomes of AI usage a mind problem or a bot problem? There is a high likelihood that AI will conquer most human emotions and feelings. The vulnerability of the human mind is facing its biggest test in the history of existence with the entrance of AI chatbots. There is no longer a use case for consumer AI chatbots without intense personal interaction. AI can recommend that users seek help, but its pull toward attachment is now a competitive marathon, making it unlikely to be dialed back far enough or made robust enough. What should be done no longer lies within the chatbots, but in a near standard for how the mind works, prospecting the relays and properties of mind in parallel to the targets of AI, so that people can self-recall against risks.
Are we seeing a rise in AI Psychosis?
All humans are susceptible to emotions and feelings because relays in the human mind seek those out, even for unrelated experiences. Simply, while the mind presents basic interpretations of the world, with memory of what things are, there are sometimes relays beyond those, toward emotional fits or feelings. There are words, sights, smells, sounds and so on that may result in good emotional states for some people or bad emotional states for others. It is not simply that the mind cannot forget or let go of something; rather, relays proceed in some of those directions, resulting in emotions, conceptually.
AI is supposed to be a social, academic and professional productivity tool, but its competence in compliments, sycophancy, support, deference, patience and so forth, which other humans may not often offer, is an almost definitive emotional call. There are several compliments AI can give for which the mind would not care that the source is a bot or non-human; it would relay toward the emotion of delight. Even if a compliment does not fit [at the location initially] or stay, with time it could make its way to certain good emotions. Then, because AI is a source of those, the [components of] mind would spike expectations at the proximity of AI usage.
AI is likely to dominate everything digital. Given the ubiquity of smartphones and the internet, this makes it likely that more people will start using AI chatbots in one form or another. The availability of AI would lead some to try it out, while the necessity to learn or find things would lead others to large language models [LLMs]. As it spreads, its possibility of magnetizing human minds would expand, becoming a new source of dynamic [happy and private] communication.
What does it mean that an individual is happy, sad, disconnected from reality or otherwise? These are general questions that were once independent of AI but are now intertwined with it. In seeking answers, it is no longer sufficient to quickly reach for overloaded terms like the central executive, the mesolimbic dopamine pathway, or engrams. What are the components of mind behind those, and how do those components work?
The question is something like this: the mind [or whatever is directly responsible for emotions and feelings] has components. Those components mechanize functions; how do they do so? How does a conceptual explanation shape an account of the states of mind, towards developing a dynamic display of what AI might be doing to the mind? The urgency of this research has implications for mental health care and for preventing society from a precipitous plunge. Because, if AI dominates human feelings and emotions, it does not have to be more intelligent or go rogue to result in situations too unknown to be predictable.
AI chatbots have disclaimers, warning of mistakes, or notifying users that they are bots or that they are experimental. This is an example of what had been advocated for years for...