
There’s a lot of hope that artificially intelligent chatbots could help provide sorely needed mental health support. Early research suggests humanlike responses from large language models could help fill in gaps in services. But there are risks. A recent study found that prompting ChatGPT with traumatic stories — the type a patient might tell a therapist — can induce an anxious response, which could be counterproductive. Ziv Ben-Zion, a clinical neuroscience researcher at Yale University and the University of Haifa, co-authored the study. Marketplace’s Meghan McCarty Carino asked him why AI appears to reflect or even experience the emotions that it’s exposed to.
By Marketplace
