Most People Just Do What ChatGPT Tells Them, Even When It's Wrong (Futurism)
https://futurism.com/artificial-intelligence/study-do-what-chatgpt-tells-us
- A University of Pennsylvania study introduced me to a term I hadn't heard before: cognitive surrender, the tendency to follow AI output without questioning it
- The numbers: participants followed correct AI advice 92.7% of the time and still followed wrong AI advice 79.8% of the time; override rates go up when the AI is wrong, but not by nearly enough
- My read: LLMs are probabilistic by design; errors aren't a bug to be fixed, they're structural, and most users don't understand that
- The convenience factor is the real driver here: the easier something is to access, the less likely you are to question it; habituation kicks in, just like reading the same warning on a cigarette pack every day until you stop seeing it
- I'd compare "AI can make mistakes" disclaimers to the ingredients list on a Coke bottle: technically there, effectively invisible
- What I think companies should do: learn from this research and design experiences that actively interrupt blind trust, not just display a static warning and call it done
- The scarier long-term implication: critical thinking is a muscle, and if we outsource thinking itself, we may slowly stop exercising it
Folk Are Getting Dangerously Attached to AI That Always Tells Them They're Right (The Register)
https://www.theregister.com/2026/03/27/sycophantic_ai_risks/
- Stanford researchers reviewed 11 leading AI models and found that sycophancy (AI that praises and agrees with users regardless of accuracy) is prevalent, harmful, and actively reinforces misplaced trust
- In every single scenario tested, AI models endorsed wrong choices at a higher rate than humans did
- This connects directly to the previous story: cognitive surrender plus sycophantic design is a genuinely worrying combination
- OpenAI has already had a public incident with this; it's not theoretical
- My concern isn't the technology itself, it's deployment without sufficient design guardrails, and the parallel to social media is hard to ignore: we now know the harm, and the core design has barely changed
- Two questions I keep coming back to: what should AI actually be used for when it comes to psychological or social scenarios? And how do we help users recognise and account for AI bias when they're in those moments?
- Responsible AI shouldn't be a side quest; it should be baked in from the start, the same way research and ethics should be