


Video annotation is an expensive and time-consuming process. As a consequence, the available video datasets are useful but small. The availability of machine-transcribed explainer videos offers a unique opportunity to rapidly develop a useful, if dirty, corpus of videos that are "self annotating," since hosts narrate the actions they are taking on screen.
This episode is a discussion of the HowTo100M dataset, a project which has assembled a video corpus of 136 million clips with captions covering 23k activities.
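To make the "self annotating" idea concrete, here is a minimal sketch (not the authors' pipeline) of how machine-transcribed narration can be turned into weakly labeled training data: each ASR subtitle segment carries start/end timestamps, so it can be paired with the video clip it overlaps. The class names, fields, and the word-count filter below are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of "self annotation": pair each timestamped ASR
# subtitle segment with the clip it spans, yielding (clip, caption)
# pairs. Field names and the min_words filter are assumptions.

from dataclasses import dataclass

@dataclass
class SubtitleSegment:
    start: float   # seconds
    end: float
    text: str      # machine-transcribed narration

@dataclass
class ClipCaptionPair:
    video_id: str
    start: float
    end: float
    caption: str

def pairs_from_transcript(video_id, segments, min_words=3):
    """Turn an ASR transcript into weakly labeled clip/caption pairs.

    The narration is noisy ("dirty"), so keep only segments with
    enough words to plausibly describe an on-screen action.
    """
    pairs = []
    for seg in segments:
        if len(seg.text.split()) < min_words:
            continue  # drop filler like "okay", "um"
        pairs.append(ClipCaptionPair(video_id, seg.start, seg.end, seg.text))
    return pairs

# Example: one cooking explainer video
segments = [
    SubtitleSegment(12.0, 15.5, "now we whisk the eggs until fluffy"),
    SubtitleSegment(15.5, 16.0, "okay"),
]
print(pairs_from_transcript("abc123", segments))
# -> one pair: clip [12.0, 15.5] captioned "now we whisk the eggs until fluffy"
```

The captions are weak labels: the narration may lag or lead the action on screen, which is why a corpus built this way trades annotation quality for scale.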
Related Links
The paper will be presented at ICCV 2019
@antoine77340
Antoine on Github
Antoine's homepage
By Kyle Polich · 4.4 (475 ratings)
