
Xi Jia chats with Dr. Michal Kosinski, an Associate Professor of Organizational Behavior at Stanford University's Graduate School of Business. Michal's research interests span both human and artificial cognition. Currently, his work centers on examining psychological processes in Large Language Models (LLMs) and on leveraging Artificial Intelligence (AI), Machine Learning (ML), Big Data, and computational techniques to model and predict human behavior.
In this episode, they discuss Michal's recent papers, "Theory of Mind Might Have Spontaneously Emerged in Large Language Models" and "Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT". Michal also shares his scientific journey and offers some personal advice for PhD students.
If you found this episode interesting, subscribe on our Substack and consider leaving us a good rating! It only takes a second and helps us reach more people and get them excited about psychology.
Michal's paper on Theory of Mind in LLMs: https://arxiv.org/abs/2302.02083
Michal's paper on reasoning bias in LLMs: https://www.nature.com/articles/s43588-023-00527-x
Michal's personal website: https://www.michalkosinski.com/
Xi Jia's profile: https://profiles.stanford.edu/xijia-zhou
Xi Jia's Twitter/X: https://twitter.com/LauraXijiaZhou
Podcast Twitter: @StanfordPsyPod
Podcast Substack: https://stanfordpsypod.substack.com/
Let us know what you thought of this episode, or of the podcast! :) [email protected]