

Weaviate Podcast #14. Thanks for watching the Weaviate podcast! Our 14th episode welcomes Yi-Lin Sung, Jaemin Cho, and Professor Mohit Bansal, a research team from UNC! Our guests present their work on VL-Adapter, a technique that matches full fine-tuning performance while updating only about 4% of the original parameters! This is an incredibly interesting finding for cost-effective tuning of Vision-and-Language models built on CLIP. We also discussed compression bottlenecks in neural architectures, V&L datasets, and the tricky question of compositional generalization. If you are curious about using CLIP in Weaviate, please check out this text-to-image search example with Unsplash images and a React frontend!
By Weaviate
