Papers Read on AI

MixFormer: End-to-End Tracking with Iterative Mixed Attention



Tracking often uses a multi-stage pipeline of feature extraction, target information integration, and bounding box estimation. To simplify this pipeline and unify the process of feature extraction and target information integration, we present a compact tracking framework, termed MixFormer, built upon transformers. Our core design is to utilize the flexibility of attention operations and propose a Mixed Attention Module (MAM) for simultaneous feature extraction and target information integration.
2022: Yutao Cui, Cheng Jiang, Limin Wang, Gangshan Wu
Ranked #1 on Visual Object Tracking on GOT-10k
https://arxiv.org/pdf/2203.11082v1.pdf
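The abstract sketches the core idea: rather than extracting template and search-region features separately and fusing them in a later stage, MixFormer mixes the two token streams inside the attention operation itself, so feature extraction and target information integration happen in one step. The PyTorch block below is a minimal sketch of that joint-attention idea under assumed names (MixedAttentionBlock, embed_dim, and num_heads are placeholders, not the paper's code); the paper's actual MAM adds an asymmetric attention scheme and other details not shown here.

```python
import torch
import torch.nn as nn


class MixedAttentionBlock(nn.Module):
    """Sketch of mixed attention: template and search tokens are concatenated
    and attended to jointly, so each token can gather information from both
    the target template and the search region in a single operation."""

    def __init__(self, embed_dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 4 * embed_dim),
            nn.GELU(),
            nn.Linear(4 * embed_dim, embed_dim),
        )

    def forward(self, template_tokens, search_tokens):
        # Concatenate both token sets and let self-attention mix them.
        n_t = template_tokens.shape[1]
        x = torch.cat([template_tokens, search_tokens], dim=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        # Split back so later layers (or a box head) can use each stream.
        return x[:, :n_t], x[:, n_t:]


# Toy usage: batch of 2, 64 template tokens and 256 search tokens, dim 256.
template = torch.randn(2, 64, 256)
search = torch.randn(2, 256, 256)
block = MixedAttentionBlock()
new_template, new_search = block(template, search)
print(new_template.shape, new_search.shape)  # (2, 64, 256) and (2, 256, 256)
```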

Papers Read on AI, by Rob

3.7 (3 ratings)