In this episode, Emily and Lukas dive into the problems with bigger and bigger language models, the difference between form and meaning, the limits of benchmarks, and why it's important to name the languages we study.
Show notes (links to papers and transcript): http://wandb.me/gd-emily-m-bender
---
Emily M. Bender is a Professor of Linguistics at the University of Washington and Faculty Director of its Master's Program in Computational Linguistics. Her research areas include multilingual grammar engineering, variation (within and across languages), the relationship between linguistics and computational linguistics, and societal issues in NLP.
---
Timestamps:
0:00 Sneak peek, intro
1:03 Stochastic Parrots
9:57 The societal impact of big language models
16:49 How language models can be harmful
26:00 The important difference between linguistic form and meaning
34:40 The octopus thought experiment
42:11 Language acquisition and the future of language models
49:47 Why benchmarks are limited
54:38 Ways of complementing benchmarks
1:01:20 The #BenderRule
1:03:50 Language diversity and linguistics
1:12:49 Outro