
There’s a lot of hope that artificially intelligent chatbots could help provide sorely needed mental health support. Early research suggests humanlike responses from large language models could help fill gaps in services. But there are risks. A recent study found that prompting ChatGPT with traumatic stories — the type a patient might tell a therapist — can induce an anxious response, which could be counterproductive. Ziv Ben-Zion, a clinical neuroscience researcher at Yale University and the University of Haifa, co-authored the study. Marketplace’s Meghan McCarty Carino asked him why AI appears to reflect, or even experience, the emotions it’s exposed to.
By Marketplace · 4.5 (1,256 ratings)