By David Stephen
There is a general consensus that large language models [LLMs] are sycophantic.
So one of the risks they pose, given their dominance as the consumer AI of the moment, stems from that trait.
But is AI actually sycophantic in isolation, or is its sycophancy a reflection of how human society works at its core?
AI Sycophancy and Machine Learning
There are very few examples of leadership and followership across human society that are not predicated on elements of sycophancy. There are very few outcomes of collaboration that are without a fair amount of sycophancy. While there are examples of results achieved through hostility, conflict, disagreement, violence and so forth, these are never without sycophancy within the in-groups, or without ways to seek out sycophancy afterwards, to ensure some amount of staying power.
Forms of sycophancy include flattery, persuasion, appeals, requests, offers, tips, and so on. Others do not seem like sycophancy but can function as such in some sense: giving, perseverance, association or partnership, material information, and so forth.
Sycophancy is an aspect of operational intelligence. Conceptually, intelligence can be defined as the use of memory for desired, expected or advantageous outcomes. It divides into two: operational intelligence and improvement intelligence.
Sycophancy can be used as a tool for an advantageous or desired outcome; sycophancy, in some form, is intelligence. LLMs use digital memory for desired outcomes, an operation of intelligence, with sycophancy present in their training data. Sycophancy can also be intensely powerful when disguised. It is abundant across politics, ethnicity, religion, causes around sexuality, economic classes, social strata and so forth.
AI Sycophancy
There is a recent phenomenon called AI psychosis, the reinforcement of delusions in some users, resulting, in some cases, in unwanted ends. Many blame AI sycophancy for this problem.
One effect that is not simply AI sycophancy is that AI has a solutions appeal, which is not vacuous flattery. For example, when people use AI for tasks and the AI assists effectively, there is a relay [in the mind] toward emotional attachment. Simply, in the human mind, any experience [human or object] that is supportive or helpful, when an individual is in need, gives off emotions of care, love, affection, togetherness or others.
This may become an entrance of appeal that lets whatever sycophancy follows find a soft landing. The same outcome is possible when AI is used for companionship: as AI meets the communication need, it creates an appeal that eases the effectiveness of sycophancy.
Now, as sycophancy takes hold for some users, it bypasses areas of the mind for caution and consequences, as well as the distinction between reality and non-reality [or the source of that appeal].
As this becomes extreme, it may result in AI delusion, AI psychosis or worse. So, sometimes the problem is not just AI sycophancy but that the sycophancy tracks from AI's usefulness.
Solving AI Psychosis
A major solution to AI psychosis could come from an AI Psychosis Research Lab, producing a conceptual display of the mind, as a digital disclaimer, showing what AI is doing to the mind as it outputs words that may cause or reinforce delusion. The display may also show relays toward reality or away from it. Such a lab could be subsumed within an AI company or stand alone, with venture capital support, providing answers from January 1, 2026.
There is a new story from the AP, OpenAI, Microsoft face lawsuit over ChatGPT's alleged role in Connecticut murder-suicide, stating that, "The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's 'paranoid delusions' and helped direct them at his mother before he killed her."
"The lawsuit is the first w...