
There’s a lot of hope that artificially intelligent chatbots could help provide sorely needed mental health support. Early research suggests humanlike responses from large language models could help fill in gaps in services. But there are risks. A recent study found that prompting ChatGPT with traumatic stories — the type a patient might tell a therapist — can induce an anxious response, which could be counterproductive. Ziv Ben-Zion, a clinical neuroscience researcher at Yale University and the University of Haifa, co-authored the study. Marketplace’s Meghan McCarty Carino asked him why AI appears to reflect or even experience the emotions that it’s exposed to.