In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.
Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender
---
Emily M. Bender is a Professor of Linguistics and Faculty Director of the Master's Program in Computational Linguistics at the University of Washington. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.
---
Timestamps:
0:00 Sneak peek, intro
1:03 Stochastic Parrots
9:57 The societal impact of big language models
16:49 How language models can be harmful
26:00 The important difference between linguistic form and meaning
34:40 The octopus thought experiment
42:11 Language acquisition and the future of language models
49:47 Why benchmarks are limited
54:38 Ways of complementing benchmarks
1:01:20 The #BenderRule
1:03:50 Language diversity and linguistics
1:12:49 Outro