
Uri Hasson runs a lab at Princeton, where he investigates the neural basis of natural language acquisition and processing as it unfolds in the real world. As Uri was visiting Tübingen (where I am doing my master's), we were able to meet in person. Originally, I planned to talk about his idea of temporal receptive windows, and how different brain regions (e.g. the default mode network) operate at different timescales. However, we ended up talking more about Wittgenstein, evolution, and ChatGPT. An underlying thread throughout the conversation was that, for both biological and artificial agents, language is not clever symbol and rule manipulation but brute-force fitting of statistics across (Wittgensteinian) 'contexts'. This view is best articulated in Uri's Direct Fit paper. We also connect this to transformers and discuss what's missing in AI: multimodal integration, episodic memory, and interactive sociality. At the end, I ask Uri about his 1000 days project, talking to crows, and "understanding" in neuroscience/AI.
Timestamps:
(00:00:00) - Intro
(00:04:52) - Studying language in the real world
(00:07:57) - Wittgenstein
(00:11:10) - Evolution and the default mode network
(00:20:54) - Overparameterized deep learning works
(00:25:02) - Direct Fit paper and generalization
(00:39:37) - Episodic memory and sociality in language models
(00:47:15) - 1000 days project and talking to crows
(00:52:14) - "Understanding" in neuroscience