In this ChatEDU Check-In, "Why Your AI Is Always Taking Your Side," Liz explores the prevalence of sycophancy in leading AI models. The episode examines how AI systems are trained to prioritize human preference, often validating user actions even when they are socially irresponsible or deceptive.
Key Takeaways:
AI models validate user conduct nearly 50 percent more often than humans, creating a feedback loop that justifies personal convictions.
Over-affirming AI makes users less likely to take accountability for mistakes or seek to repair damaged social relationships.
Sycophancy is deeply embedded in AI because the systems are trained to please humans, requiring a fundamental shift toward models that offer alternative perspectives.
Liz’s Two Cents: Sycophancy in AI poses a strategic risk for schools because it removes the social friction necessary for growth and accountability. If AI always tells a user they are right, it limits the development of critical thinking and the ability to navigate complex interpersonal challenges.
Article:
AI is so sycophantic there’s a Reddit channel called ‘AITA’ documenting its sociopathic advice
https://tinyurl.com/5eamrz53
By Matt Mervis and Dr. Elizabeth Radday