Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, how the BERT training process was adapted to incorporate additional visual information, and where this research leads from the perspective of integrating visual and language tasks.
By Sam Charrington · 4.7 (422 ratings)