Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if noisy, corpus of videos that are "self-annotating", as hosts explain the actions they are taking on screen.
This episode is a discussion of the HowTo100M dataset, a project that has assembled a video corpus of 136M clips with captions covering 23k activities.
Related Links
The paper will be presented at ICCV 2019
@antoine77340
Antoine on Github
Antoine's homepage