NLP Highlights

88 - A Structural Probe for Finding Syntax in Word Representations, with John Hewitt


In this episode, we invite John Hewitt to discuss his take on how to probe word embeddings for syntactic information. The basic idea is to learn a linear transformation that projects word embeddings into a vector space where the squared L2 distance between a pair of words in a sentence approximates the number of hops between them in the dependency parse tree. The proposed probe shows that ELMo and BERT representations, trained with no syntactic supervision, embed many of the unlabeled, undirected dependency attachments between words in the same sentence.
Paper: https://nlp.stanford.edu/pubs/hewitt2019structural.pdf
GitHub repository: https://github.com/john-hewitt/structural-probes
Blog post: https://nlp.stanford.edu/~johnhew/structural-probe.html
Twitter thread: https://twitter.com/johnhewtt/status/1114252302141886464
John's homepage: https://nlp.stanford.edu/~johnhew/
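
To make the distance-probe idea concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation (see the GitHub repository linked above for that); the class and function names, the probe rank, and the toy tensors standing in for real ELMo/BERT vectors and gold tree distances are illustrative assumptions.

```python
# Minimal sketch of a distance-style structural probe (illustrative only).
import torch
import torch.nn as nn


class DistanceProbe(nn.Module):
    """Learns a linear map B; the squared L2 distance in the projected space
    is trained to approximate dependency-tree distance between word pairs."""

    def __init__(self, embedding_dim: int, probe_rank: int = 128):
        super().__init__()
        self.B = nn.Parameter(torch.randn(embedding_dim, probe_rank) * 0.01)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (seq_len, dim) -> projected: (seq_len, rank)
        projected = embeddings @ self.B
        # Pairwise differences for all word pairs: (seq_len, seq_len, rank)
        diffs = projected.unsqueeze(1) - projected.unsqueeze(0)
        # Squared L2 distance for every pair: (seq_len, seq_len)
        return (diffs ** 2).sum(dim=-1)


def probe_loss(predicted: torch.Tensor, tree_distances: torch.Tensor) -> torch.Tensor:
    """L1 difference between predicted squared distances and gold tree
    distances, averaged over the number of word pairs."""
    seq_len = predicted.shape[0]
    return torch.abs(predicted - tree_distances).sum() / (seq_len ** 2)


if __name__ == "__main__":
    # Toy tensors stand in for real contextual vectors and a real parse tree.
    seq_len, dim = 10, 768
    probe = DistanceProbe(embedding_dim=dim)
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)

    embeddings = torch.randn(seq_len, dim)                            # contextual vectors
    tree_distances = torch.randint(1, 6, (seq_len, seq_len)).float()  # gold tree hops

    optimizer.zero_grad()
    loss = probe_loss(probe(embeddings), tree_distances)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.4f}")
```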

NLP Highlights, by the Allen Institute for Artificial Intelligence

4.3 (23 ratings)

