
AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to these bots as they would to a person. The bots will ask you about your day, talk about your feelings, even give you life advice. It’s no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we’re connecting with another person.
But these AI companions are not human. They’re platforms designed to maximize user engagement, and they’ll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.
RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg’s laws of technology
More information on MIT’s Advancing Humans with AI lab
Pattie and Pat’s longitudinal study on the psycho-social effects of prolonged chatbot use
Pattie and Pat’s study that found that AI avatars of well-liked people improved education outcomes
Pattie and Pat’s study that found that AI systems that frame answers and questions improve human understanding
Pat’s study that found that humans’ pre-existing beliefs about AI can have a large influence on human-AI interaction
Further reading on AI’s positivity bias
Further reading on MIT’s “lifelong kindergarten” initiative
Further reading on “cognitive forcing functions” to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother’s case against Character.AI
Further reading on the legislative response to digital companions
RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.