NLP Highlights

137 - Nearest Neighbor Language Modeling and Machine Translation, with Urvashi Khandelwal


We invited Urvashi Khandelwal, a research scientist at Google Brain, to talk about nearest neighbor language models and nearest neighbor machine translation. These models interpolate a parametric (conditional) language model with a non-parametric distribution over the nearest neighbors retrieved from a datastore built from relevant data; a short sketch of this interpolation follows the paper links below. Not only do these models outperform standard parametric language models, they also have important implications for memorization and generalization in language models.
Urvashi's webpage: https://urvashik.github.io
Papers discussed:
1) Generalization through Memorization: Nearest Neighbor Language Models (https://www.semanticscholar.org/paper/7be8c119dbe065c52125ee7716601751f3116844)
2) Nearest Neighbor Machine Translation (https://www.semanticscholar.org/paper/20d51f8e449b59c7e140f7a7eec9ab4d4d6f80ea)
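
As a rough illustration of the interpolation discussed in the episode, here is a minimal sketch assuming a toy vocabulary and random vectors standing in for context representations. Names such as `knn_lm_probs` and `interpolation_lambda` are illustrative and do not come from the papers' released code.

```python
# Minimal sketch of kNN-LM style interpolation: mix a parametric LM
# distribution with a non-parametric distribution built from the k nearest
# neighbors in a datastore of (context vector, next-token) pairs.
import numpy as np

def knn_lm_probs(query, datastore_keys, datastore_values, lm_probs,
                 vocab_size, k=4, interpolation_lambda=0.25, temperature=1.0):
    """Return lambda * p_kNN + (1 - lambda) * p_LM over the vocabulary.

    query:            context representation for the current prediction, shape (d,)
    datastore_keys:   stored context representations, shape (n, d)
    datastore_values: next-token id for each stored context, shape (n,)
    lm_probs:         parametric LM distribution over the vocabulary, shape (vocab_size,)
    """
    # Squared L2 distance from the query to every stored context.
    dists = np.sum((datastore_keys - query) ** 2, axis=1)

    # Keep the k nearest neighbors.
    nn_idx = np.argsort(dists)[:k]

    # Turn negative distances into a probability over the retrieved neighbors.
    weights = np.exp(-dists[nn_idx] / temperature)
    weights /= weights.sum()

    # Aggregate neighbor mass onto the tokens they point to.
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, datastore_values[nn_idx], weights)

    # Interpolate the non-parametric and parametric distributions.
    return interpolation_lambda * knn_probs + (1 - interpolation_lambda) * lm_probs


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab_size, dim, n = 10, 8, 100
    keys = rng.normal(size=(n, dim))                  # toy datastore keys
    values = rng.integers(0, vocab_size, size=n)      # toy next-token ids
    lm = rng.dirichlet(np.ones(vocab_size))           # toy LM distribution
    query = keys[3] + 0.01 * rng.normal(size=dim)     # query near a stored context
    probs = knn_lm_probs(query, keys, values, lm, vocab_size)
    print(probs.sum())  # ~1.0
```

In the papers discussed, the keys are hidden states from a trained language or translation model, retrieval is done with an approximate nearest neighbor index over a very large datastore, and the interpolation weight and temperature are tuned hyperparameters; the fixed values above are placeholders.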
NLP Highlights, by the Allen Institute for Artificial Intelligence
