ViSMaP is a novel unsupervised system for summarizing hour-long videos, addressing the challenge of limited annotated data for such content. It uses a "Meta-Prompting" strategy in which three Large Language Models (LLMs) iteratively generate, evaluate, and refine "pseudo-summaries" of long videos. These LLM-generated pseudo-summaries then serve as training data, bypassing the need for costly manual annotation. The system reportedly achieves performance comparable to supervised methods and generalizes well across different video types. The approach aims to make solutions for understanding lengthy videos more accessible and scalable to develop.
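The generate–evaluate–refine loop described above can be sketched roughly as follows. This is a toy illustration only: the function names, scoring rule, and stopping criterion are assumptions for demonstration, and the three "LLM" roles are stand-in Python functions rather than actual model calls.

```python
# Toy sketch of a three-role meta-prompting loop (generator, evaluator,
# optimizer), in the spirit of the ViSMaP description. Each role would be
# a prompted LLM in practice; here they are deterministic stand-ins.

def generate_summary(clip_captions, feedback=None):
    # Generator role (stand-in): drafts a pseudo-summary from short clip
    # captions, optionally conditioned on feedback from a previous round.
    draft = " ".join(clip_captions)
    return draft if feedback is None else draft + " " + feedback

def evaluate_summary(summary, clip_captions):
    # Evaluator role (stand-in): scores the fraction of source captions
    # covered by the summary (a crude proxy for semantic coverage).
    covered = sum(1 for c in clip_captions if c in summary)
    return covered / len(clip_captions)

def refine_feedback(score):
    # Optimizer role (stand-in): turns the evaluation into guidance
    # for the next generation round.
    return f"(coverage {score:.0%}; add missing events)"

def meta_prompting(clip_captions, rounds=3, threshold=1.0):
    # Iterate generate -> evaluate -> refine, keeping the best summary.
    feedback, best_summary, best_score = None, "", 0.0
    for _ in range(rounds):
        summary = generate_summary(clip_captions, feedback)
        score = evaluate_summary(summary, clip_captions)
        if score > best_score:
            best_summary, best_score = summary, score
        if score >= threshold:
            break
        feedback = refine_feedback(score)
    return best_summary, best_score

captions = ["a chef chops onions", "the sauce simmers", "guests arrive"]
summary, score = meta_prompting(captions)
```

The resulting pseudo-summaries would then be used as supervision for training a long-video summarization model, replacing manual annotations.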
By Benjamin Alloul