
Xi Jia chats with Dr. Michal Kosinski, an Associate Professor of Organizational Behavior at Stanford University's Graduate School of Business. Michal's research encompasses both human and artificial cognition. His current work centers on examining psychological processes in Large Language Models (LLMs) and on leveraging Artificial Intelligence (AI), Machine Learning (ML), Big Data, and computational techniques to model and predict human behavior.
In this episode, they discuss Michal's recent papers, "Theory of Mind Might Have Spontaneously Emerged in Large Language Models" and "Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT". Michal also shares his scientific journey and offers some personal advice for PhD students.
If you found this episode interesting at all, subscribe on our Substack and consider leaving us a good rating! It just takes a second but will allow us to reach more people and make them excited about psychology.
Michal's paper on Theory of Mind in LLMs: https://arxiv.org/abs/2302.02083
Michal's paper on reasoning bias in LLMs: https://www.nature.com/articles/s43588-023-00527-x
Michal's personal website: https://www.michalkosinski.com/
Xi Jia's profile: https://profiles.stanford.edu/xijia-zhou
Xi Jia's Twitter/X: https://twitter.com/LauraXijiaZhou
Podcast Twitter @StanfordPsyPod
Podcast Substack https://stanfordpsypod.substack.com/
Let us know what you thought of this episode, or of the podcast! :) [email protected]