Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta.
We caught up with Alona on the heels of an interesting panel discussion she participated in, centered on improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation.
The complete show notes for this episode can be found at twimlai.com/go/513.