
This week’s guests are Steven Feng, Graduate Student, and Ed Hovy, Research Professor, both from the Language Technologies Institute of Carnegie Mellon University. We discussed their recent survey paper on Data Augmentation Approaches in NLP (GitHub), an active field of research on techniques for increasing the diversity of training examples without explicitly collecting new data. One key reason such strategies matter is that augmented data can act as a regularizer, reducing overfitting when training models.
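For readers new to the idea, the snippet below is a minimal sketch of the kind of rule-based, token-level augmentation the survey categorizes (random adjacent swaps and random deletions, in the spirit of EDA-style methods). The function name and probabilities are purely illustrative and are not taken from the paper.

```python
import random

def augment(sentence, p_swap=0.1, p_delete=0.1, seed=None):
    """Return a lightly perturbed copy of `sentence`.

    Two simple token-level operations often used in rule-based
    augmentation: randomly swapping adjacent tokens and randomly
    dropping tokens. Probabilities here are illustrative defaults.
    """
    rng = random.Random(seed)
    tokens = sentence.split()

    # Random adjacent swaps: each position may trade places with its neighbor.
    for i in range(len(tokens) - 1):
        if rng.random() < p_swap:
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]

    # Random deletion: drop tokens, but never return an empty sentence.
    kept = [t for t in tokens if rng.random() >= p_delete]
    if not kept:
        kept = [rng.choice(tokens)]

    return " ".join(kept)


if __name__ == "__main__":
    original = "data augmentation increases the diversity of training examples"
    # Generate a few augmented variants of the same sentence.
    for i in range(3):
        print(augment(original, seed=i))
```

Each perturbed variant keeps the label of the original example, so the model sees more surface-level variety for the same supervision signal, which is what gives augmentation its regularizing effect.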
Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.
Detailed show notes can be found on The Data Exchange web site.
Subscribe to The Gradient Flow Newsletter.