Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta.
We caught up with Alona on the heels of an interesting panel discussion she participated in, centered on improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images used in this research, what representations look like in these images, and how we can improve language models without explicitly knowing how the brain understands language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation.
The complete show notes for this episode can be found at twimlai.com/go/513.
By Sam Charrington · 4.7 (422 ratings)