Daily Paper Cast

Scaling Pre-training to One Hundred Billion Data for Vision Language Models



🤗 Upvotes: 15 | cs.CV

Authors:

Xiao Wang, Ibrahim Alabdulmohsin, Daniel Salz, Zhe Li, Keran Rong, Xiaohua Zhai

Title:

Scaling Pre-training to One Hundred Billion Data for Vision Language Models

Arxiv:

http://arxiv.org/abs/2502.07617v1

Abstract:

We provide an empirical investigation of the potential of pre-training vision-language models on an unprecedented scale: 100 billion examples. We find that model performance tends to saturate at this scale on many common Western-centric classification and retrieval benchmarks, such as COCO Captions. Nevertheless, tasks probing cultural diversity achieve more substantial gains from the 100-billion-example web data, thanks to its coverage of long-tail concepts. Furthermore, we analyze the model's multilinguality and show gains in low-resource languages as well. In addition, we observe that reducing the size of the pre-training dataset via quality filters such as CLIP, typically used to enhance performance, may inadvertently reduce the cultural diversity represented even in large-scale datasets. Our results highlight that while traditional benchmarks may not benefit significantly from scaling noisy, raw web data to 100 billion examples, this data scale is vital for building truly inclusive multimodal systems.
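The CLIP-based quality filtering the abstract refers to can be pictured as scoring each image-caption pair by CLIP similarity and dropping pairs below a cutoff. The sketch below is a minimal illustration of that idea, not the paper's actual pipeline; the checkpoint and the 0.3 threshold are assumptions chosen for illustration.

```python
# Minimal sketch of CLIP-score data filtering. The model checkpoint and the
# threshold below are illustrative assumptions, not the paper's settings.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image, caption: str) -> float:
    """Cosine similarity between the CLIP embeddings of a PIL image and its caption."""
    inputs = processor(text=[caption], images=[image],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1).item()

def filter_pairs(pairs, threshold: float = 0.3):
    """Keep only (image, caption) pairs whose CLIP score clears the threshold."""
    return [(img, cap) for img, cap in pairs if clip_score(img, cap) >= threshold]
```

Because the cutoff is calibrated on whatever the scoring model already represents well, a filter like this tends to keep mainstream Western content and discard long-tail, culturally diverse examples, which is the side effect the paper highlights.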


Daily Paper Cast, by Jingwen Liang and Gengyu Wang