
Xi Jia chats with Dr. Michal Kosinski, an Associate Professor of Organizational Behavior at Stanford University's Graduate School of Business. Michal's recent research interests encompass both human and artificial cognition. His current work centers on examining psychological processes in Large Language Models (LLMs) and on leveraging Artificial Intelligence (AI), Machine Learning (ML), Big Data, and computational techniques to model and predict human behavior.
In this episode, they discuss two of Michal's recent papers: "Theory of Mind Might Have Spontaneously Emerged in Large Language Models" and "Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT". Michal also shares his scientific journey and offers some personal advice for PhD students.
If you found this episode interesting at all, subscribe on our Substack and consider leaving us a good rating! It only takes a second, and it helps us reach more people and get them excited about psychology.
Michal's paper on Theory of Mind in LLMs: https://arxiv.org/abs/2302.02083
Michal's paper on reasoning bias in LLMs: https://www.nature.com/articles/s43588-023-00527-x
Michal's personal website: https://www.michalkosinski.com/
Xi Jia's profile: https://profiles.stanford.edu/xijia-zhou
Xi Jia's Twitter/X: https://twitter.com/LauraXijiaZhou
Podcast Twitter: @StanfordPsyPod
Podcast Substack: https://stanfordpsypod.substack.com/
Let us know what you thought of this episode, or of the podcast! :) [email protected]
By Stanford Psychology
