NLP Highlights

125 - VQA for Real Users, with Danna Gurari

05.04.2021 - By Allen Institute for Artificial Intelligence


How can we build Visual Question Answering systems for real users? For this episode, we chatted with Danna Gurari about her work on building datasets and models for VQA for people who are blind. We talked about how existing datasets differ from VizWiz, a dataset built by Gurari et al., and the algorithmic changes that result. We also discussed the unsolved challenges in this field and the new tasks they give rise to.

Danna Gurari is an Assistant Professor and the Founding Director of the Image and Video Computing group in the School of Information at the University of Texas at Austin (UT Austin).

VizWiz project page: https://vizwiz.org/

The hosts for this episode are Ana Marasović and Pradeep Dasigi.
