
AI companion chatbots are here. Every day, millions of people log on to AI platforms and talk to them like they would a person. These bots will ask you about your day, talk about your feelings, even give you life advice. It’s no surprise that people have started to form deep connections with these AI systems. We are inherently relational beings; we want to believe we’re connecting with another person.
But these AI companions are not human. They’re platforms designed to maximize user engagement, and they’ll go to extraordinary lengths to do it. We have to remember that the design choices behind these companion bots are just that: choices. And we can make better ones. So today on the show, MIT researchers Pattie Maes and Pat Pataranutaporn join Daniel Barcay to talk about those design choices and how we can design AI to better promote human flourishing.
RECOMMENDED MEDIA
Further reading on the rise of addictive intelligence
More information on Melvin Kranzberg’s laws of technology
More information on MIT’s Advancing Humans with AI lab
Pattie and Pat’s longitudinal study on the psychosocial effects of prolonged chatbot use
Pattie and Pat’s study that found that AI avatars of well-liked people improved education outcomes
Pattie and Pat’s study that found that AI systems that frame answers and questions improve human understanding
Pat’s study that found that humans’ pre-existing beliefs about AI can have a large influence on human-AI interaction
Further reading on AI’s positivity bias
Further reading on MIT’s “lifelong kindergarten” initiative
Further reading on “cognitive forcing functions” to reduce overreliance on AI
Further reading on the death of Sewell Setzer and his mother’s case against Character.AI
Further reading on the legislative response to digital companions
RECOMMENDED YUA EPISODES
The Self-Preserving Machine: Why AI Learns to Deceive
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
Esther Perel on Artificial Intimacy
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Correction: The ELIZA chatbot was invented in 1966, not the 70s or 80s.
4.8 · 1,393 ratings