Papers Read on AI

Vision Transformer with Deformable Attention



We propose a novel deformable self-attention module, where the positions of key and value pairs in self-attention are selected in a data-dependent way. This flexible scheme enables the self-attention module to focus on relevant regions and capture more informative features.
2022: Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, Gao Huang
Ranked #1 on Object Detection on COCO test-dev (AP metric)
https://arxiv.org/pdf/2201.00520v1.pdf
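The core idea of the abstract, selecting key/value positions in a data-dependent way, can be illustrated with a toy sketch. The following is a simplified 1D analogue (not the paper's code): offsets are predicted from the queries, keys and values are gathered at the shifted positions, and standard scaled dot-product attention is applied over only those sampled positions. All names and the offset parameterization here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def deformable_attention_1d(x, W_q, W_k, W_v, W_off):
    """Toy 1D deformable attention (illustrative sketch, not the paper's code).

    x: (L, d) sequence of features.
    Offsets are predicted from the queries (data-dependent); keys/values
    are then gathered at the shifted, clipped integer positions, so each
    query attends only to its own sampled set of positions.
    """
    L, d = x.shape
    q = x @ W_q                                   # (L, d) queries
    # predict one offset per sample slot, bounded to the sequence length
    offsets = np.tanh(q @ W_off) * (L / 2)        # (L, n_samples)
    base = np.arange(L)[:, None]                  # reference positions
    pos = np.clip(np.round(base + offsets).astype(int), 0, L - 1)
    k = (x @ W_k)[pos]                            # (L, n_samples, d) sampled keys
    v = (x @ W_v)[pos]                            # (L, n_samples, d) sampled values
    attn = softmax((q[:, None, :] * k).sum(-1) / np.sqrt(d))  # (L, n_samples)
    return (attn[..., None] * v).sum(axis=1)      # (L, d)

# usage: tiny random example
rng = np.random.default_rng(0)
L, d, n_samples = 6, 4, 3
x = rng.normal(size=(L, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
W_off = rng.normal(size=(d, n_samples))
out = deformable_attention_1d(x, W_q, W_k, W_v, W_off)
print(out.shape)  # (6, 4)
```

The real module operates on 2D feature maps and uses bilinear interpolation at fractional offsets; rounding to integer positions here is a simplification to keep the sketch short.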

Papers Read on AI, by Rob


Rating: 3.7 (3 ratings)