Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta.
We caught up with Alona on the heels of an interesting panel discussion she participated in, centered on improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images used in this research, what representations look like in these images, and how we can improve language models without explicitly knowing how the brain understands language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation.
The complete show notes for this episode can be found at twimlai.com/go/513.