Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta.
We caught up with Alona on the heels of an interesting panel discussion she participated in, centered on improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation.
The complete show notes for this episode can be found at twimlai.com/go/513.