The Gradient: Perspectives on AI

Sewon Min: The Science of Natural Language


In episode 65 of The Gradient Podcast, Daniel Bashir speaks to Sewon Min.

Sewon is a fifth-year PhD student in the NLP group at the University of Washington, advised by Hannaneh Hajishirzi and Luke Zettlemoyer. She is a part-time visiting researcher at Meta AI and a recipient of the JP Morgan PhD Fellowship. She has previously spent time at Google Research and Salesforce Research.

Have suggestions for future podcast guests (or other feedback)? Let us know here!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (03:00) Origin Story

* (04:20) Evolution of Sewon’s interests, question-answering and practical NLP

* (07:00) Methodology concerns about benchmarks

* (07:30) Multi-hop reading comprehension

* (09:30) Do multi-hop QA benchmarks actually measure multi-hop reasoning?

* (12:00) How models can “cheat” multi-hop benchmarks

* (13:15) Explicit compositionality

* (16:05) Commonsense reasoning and background information

* (17:30) On constructing good benchmarks

* (18:40) AmbigQA and ambiguity

* (22:20) Types of ambiguity

* (24:20) Practical possibilities for models that can handle ambiguity

* (25:45) FaVIQ and fact-checking benchmarks

* (28:45) External knowledge

* (29:45) Fact verification and “complete understanding of evidence”

* (31:30) Do models do what we expect/intuit in reading comprehension?

* (34:40) Applications for fact-checking systems

* (36:40) Intro to in-context learning (ICL)

* (38:55) Example of an ICL demonstration

* (40:45) Rethinking the Role of Demonstrations and what matters for successful ICL

* (43:00) Evidence for a Bayesian inference perspective on ICL

* (45:00) ICL + gradient descent and what it means to “learn”

* (47:00) MetaICL and efficient ICL

* (49:30) Distance between tasks and MetaICL task transfer

* (53:00) Compositional tasks for language models, compositional generalization

* (55:00) The number and diversity of meta-training tasks

* (58:30) MetaICL and Bayesian inference

* (1:00:30) Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations

* (1:02:00) The copying effect

* (1:03:30) Copying effect for non-identical examples

* (1:06:00) More thoughts on ICL

* (1:08:00) Understanding Chain-of-Thought Prompting

* (1:11:30) Bayes strikes again

* (1:12:30) Intro to Sewon’s text retrieval research

* (1:15:30) Dense Passage Retrieval (DPR)

* (1:18:40) Similarity in QA and retrieval

* (1:20:00) Improvements for DPR

* (1:21:50) Nonparametric Masked Language Modeling (NPM)

* (1:24:30) Difficulties in training NPM and solutions

* (1:26:45) Follow-on work

* (1:29:00) Important fundamental limitations of language models

* (1:31:30) Sewon’s experience doing a PhD

* (1:34:00) Research challenges suited for academics

* (1:35:00) Joys and difficulties of the PhD

* (1:36:30) Sewon’s advice for aspiring PhDs

* (1:38:30) Incentives in academia, production of knowledge

* (1:41:50) Outro

Links:

* Sewon’s homepage and Twitter

* Papers

  * Solving and re-thinking benchmarks

    * Multi-hop Reading Comprehension through Question Decomposition and Rescoring / Compositional Questions Do Not Necessitate Multi-hop Reasoning

    * AmbigQA: Answering Ambiguous Open-domain Questions

    * FaVIQ: FAct Verification from Information-seeking Questions

  * Language Modeling

    * Rethinking the Role of Demonstrations

    * MetaICL: Learning to Learn In Context

    * Towards Understanding CoT Prompting

    * Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations

  * Text representation/retrieval

    * Dense Passage Retrieval

    * Nonparametric Masked Language Modeling



Get full access to The Gradient at thegradientpub.substack.com/subscribe