
There’s a lot of hope that artificially intelligent chatbots could help provide sorely needed mental health support. Early research suggests humanlike responses from large language models could help fill in gaps in services. But there are risks. A recent study found that prompting ChatGPT with traumatic stories — the type a patient might tell a therapist — can induce an anxious response, which could be counterproductive. Ziv Ben-Zion, a clinical neuroscience researcher at Yale University and the University of Haifa, co-authored the study. Marketplace’s Meghan McCarty Carino asked him why AI appears to reflect or even experience the emotions that it’s exposed to.