


This week’s guests are Steven Feng, a graduate student, and Ed Hovy, a research professor, both from the Language Technologies Institute at Carnegie Mellon University. We discussed their recent survey paper on Data Augmentation Approaches in NLP (GitHub), an active field of research on techniques for increasing the diversity of training examples without explicitly collecting new data. One key reason such strategies matter is that augmented data can act as a regularizer, reducing overfitting when training models.
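To make the idea concrete, here is a minimal sketch of two simple rule-based text augmentations of the kind covered in the survey literature (random token deletion and random swap). The function names and parameters are my own, for illustration only; real augmentation pipelines are more careful about label preservation.

```python
import random

def random_delete(tokens, p=0.1, rng=None):
    """Drop each token with probability p; always keep at least one token."""
    rng = rng or random.Random(0)
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]

def random_swap(tokens, n_swaps=1, rng=None):
    """Swap n_swaps random pairs of tokens to perturb word order."""
    rng = rng or random.Random(0)
    tokens = list(tokens)
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

# Generate several perturbed variants of one training sentence.
sentence = "data augmentation increases the diversity of training examples".split()
augmented = [
    random_swap(random_delete(sentence, rng=random.Random(s)), rng=random.Random(s))
    for s in range(3)
]
```

Each variant keeps the sentence's vocabulary but changes its surface form, giving the model slightly different inputs for the same label, which is the regularizing effect mentioned above.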
Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.
Detailed show notes can be found on The Data Exchange web site.
Subscribe to The Gradient Flow Newsletter.
By Ben Lorica
