In this Decoding Academia episode, we take a look at a 2025 paper by Daria Ovsyannikova, Victoria Oldemburgo de Mello, and Mickey Inzlicht, which asks a question that might make some people uncomfortable (or angry): are AI-generated responses perceived as more empathetic than those written by actual humans?
We walk through the design in detail (including why this is a genuinely severe test), hand out deserved open-science brownie points, and discuss why AI seems to excel particularly when responding to negative or distress-laden prompts. Along the way, Chris reflects on his unsettlingly intense relationship with Google’s semi-sentient customer-service agent “Bubbles,” and we ask whether infinite patience, maximal effort, and zero social awkwardness might be doing most of the work here.
This is not a paper about replacing therapists, outsourcing friendship, or mass-producing compassion at scale. It is a careful demonstration that fluent, effortful, emotionally calibrated text is often enough to convince people they are being understood, which might explain some of the appeal of the Gurus.
Source
Ovsyannikova, D., de Mello, V. O., & Inzlicht, M. (2025). Third-party evaluators perceive AI as more compassionate than expert humans. Communications Psychology, 3(1), 4.
Decoding Academia 34: Empathetic AIs?
01:40 Introducing the Paper
10:29 Study Methodology
14:21 Chris's meaningful relationship with YouTube AI agent Bubbles
16:23 Open Science Brownie Points
17:50 Empathetic Prompt Engineering: Humans and AIs
21:17 Study 1 and 2
31:35 Study 3 and 4
37:00 Study Conclusions
42:27 Severe Hypothesis Testing
45:11 Seeking out Disconfirming Evidence
47:06 Why do AIs do better on negative prompts?
54:48 Final Thoughts
By Christopher Kavanagh and Matthew Browne
4.2 · 933 ratings