There’s a lot of hope that artificially intelligent chatbots could help provide sorely needed mental health support. Early research suggests humanlike responses from large language models could help fill gaps in services. But there are risks. A recent study found that prompting ChatGPT with traumatic stories — the kind a patient might tell a therapist — can induce an anxious response, which could be counterproductive. Ziv Ben-Zion, a clinical neuroscience researcher at Yale University and the University of Haifa, co-authored the study. Marketplace’s Meghan McCarty Carino asked him why AI appears to reflect, or even experience, the emotions it’s exposed to.
By Marketplace