Yannic Kilcher Videos (Audio Only)

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding & Generation



#blip #review #ai


Cross-modal pre-training has been all the rage lately in deep learning, especially training vision and language models together. However, there are a number of issues, such as low-quality datasets that limit the performance of any model trained on them, and the fact that models from purely contrastive pre-training cannot easily be fine-tuned for most downstream tasks. BLIP unifies different tasks and objectives in a single pre-training run and achieves a much more versatile model, which the paper immediately uses to create, filter, and clean its own dataset, thus bootstrapping it to improve performance even more!
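
As a rough illustration of the captioning-and-filtering ("CapFilt") bootstrapping described above, here is a minimal Python sketch. The captioner, the filter, and all function names are hypothetical placeholders, not the actual code from the BLIP repository:

# Hypothetical sketch of BLIP-style dataset bootstrapping (CapFilt).
# captioner.generate_caption() stands in for the fine-tuned image captioner,
# filter_model.matches_image() for the fine-tuned image-text matching head;
# neither is the real API of the linked repository.
def bootstrap_dataset(web_pairs, human_pairs, captioner, filter_model):
    cleaned = list(human_pairs)  # trusted human-annotated pairs are kept
    for image, web_caption in web_pairs:
        # The captioner proposes a synthetic caption for each web image.
        synthetic = captioner.generate_caption(image)
        # The filter keeps only captions it judges to match the image,
        # whether they came from the web or from the captioner.
        for caption in (web_caption, synthetic):
            if filter_model.matches_image(image, caption):
                cleaned.append((image, caption))
    return cleaned  # the next model is pre-trained on this cleaned dataset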


Sponsor: Zeta Alpha

https://zeta-alpha.com

Use code YANNIC for 20% off!


OUTLINE:

0:00 - Intro

0:50 - Sponsor: Zeta Alpha

3:40 - Paper Overview

6:40 - Vision-Language Pre-Training

11:15 - Contributions of the paper

14:30 - Model architecture: many parts for many tasks

19:50 - How data flows in the model

26:50 - Parameter sharing between the modules

29:45 - Captioning & Filtering bootstrapping

41:10 - Fine-tuning the model for downstream tasks


Paper: https://arxiv.org/abs/2201.12086

Code: https://github.com/salesforce/BLIP

Demo: https://huggingface.co/spaces/Salesfo...


Abstract:

Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released at https://github.com/salesforce/BLIP.
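
If you want to try the released captioning model yourself, here is a minimal sketch using the Hugging Face transformers port of BLIP. Note that the port and the checkpoint name "Salesforce/blip-image-captioning-base" are assumptions beyond the official repo and demo linked above:

import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Assumption: BLIP checkpoints ported to Hugging Face transformers;
# the exact checkpoint name may differ from the official release.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Any test image works; here we fetch a sample photo over HTTP.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Preprocess the image and let the decoder generate a caption.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))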


Authors: Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi


Links:

TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick

YouTube: https://www.youtube.com/c/yannickilcher

Twitter: https://twitter.com/ykilcher

Discord: https://discord.gg/4H8xxDF

BitChute: https://www.bitchute.com/channel/yann...

LinkedIn: https://www.linkedin.com/in/ykilcher

BiliBili: https://space.bilibili.com/2017636191


If you want to support me, the best thing to do is to share out the content :)


If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):

SubscribeStar: https://www.subscribestar.com/yannick...

Patreon: https://www.patreon.com/yannickilcher

Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq

Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2

Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m

Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n

