The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358

03.18.2020 - By Sam Charrington


Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for the model, how the pretraining approach was adapted to incorporate additional visual information into BERT-style models, and where this research leads from the perspective of integrating vision and language tasks.
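
For readers unfamiliar with the idea of combining visual and language inputs in a BERT-style model, the following is a minimal illustrative sketch (not the authors' code) of a single co-attentional block in the spirit of ViLBERT's two-stream design, where a text stream and an image-region stream exchange attention. All names, dimensions, and the use of PyTorch here are assumptions for illustration only.

```python
# Illustrative sketch only: a two-stream co-attention block where text tokens
# attend over image-region features and vice versa. Not the ViLBERT reference
# implementation; names and dimensions are hypothetical.
import torch
import torch.nn as nn


class CoAttentionBlock(nn.Module):
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        # Text queries attend to image keys/values, and image queries to text.
        self.txt_attends_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_attends_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_txt = nn.LayerNorm(dim)
        self.norm_img = nn.LayerNorm(dim)

    def forward(self, txt: torch.Tensor, img: torch.Tensor):
        # txt: (batch, n_tokens, dim)  BERT-style token embeddings
        # img: (batch, n_regions, dim) projected image-region features
        txt_out, _ = self.txt_attends_img(query=txt, key=img, value=img)
        img_out, _ = self.img_attends_txt(query=img, key=txt, value=txt)
        # Residual connection plus layer norm, as in standard transformer blocks.
        return self.norm_txt(txt + txt_out), self.norm_img(img + img_out)


if __name__ == "__main__":
    block = CoAttentionBlock()
    text = torch.randn(2, 16, 768)     # e.g. 16 wordpiece tokens
    regions = torch.randn(2, 36, 768)  # e.g. 36 detected image regions
    t, v = block(text, regions)
    print(t.shape, v.shape)  # torch.Size([2, 16, 768]) torch.Size([2, 36, 768])
```

In the paper discussed in the episode, blocks along these lines let each modality condition on the other while keeping separate processing streams, rather than feeding both modalities through a single shared transformer.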
