Authors
Zhuofan Xia, Xuran Pan, Shiji Song, Li Erran Li, Gao Huang
Publication date
2022/7/3
Conference
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Description
Transformers have recently shown superior performance on various vision tasks. The large, sometimes even global, receptive field endows Transformer models with higher representation power than their CNN counterparts. Nevertheless, simply enlarging the receptive field also raises several concerns. On the one hand, using dense attention, e.g., in ViT, leads to excessive memory and computational cost, and features can be influenced by irrelevant parts beyond the region of interest. On the other hand, the sparse attention adopted in PVT or Swin Transformer is data-agnostic and may limit the ability to model long-range relations. To mitigate these issues, we propose a novel deformable self-attention module, where the positions of key and value pairs in self-attention are selected in a data-dependent way. This flexible scheme enables the self-attention module to focus on relevant regions and capture more informative features. On this basis, we present Deformable Attention Transformer, a general backbone model with deformable attention for both image classification and dense prediction tasks. Extensive experiments show that our models achieve consistently improved results on comprehensive benchmarks.
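To make the data-dependent sampling concrete, below is a minimal PyTorch sketch of the idea described above: offsets are predicted from the queries, a uniform reference grid is deformed by those offsets, and keys/values are bilinearly sampled at the deformed positions before standard multi-head attention. This is not the paper's official implementation; the offset network is simplified to a single linear layer predicting per-image offsets, and names such as DeformableAttentionSketch, offset_net, and num_points are illustrative.

```python
# Hedged sketch of deformable self-attention (illustrative, not the DAT release).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttentionSketch(nn.Module):
    def __init__(self, dim, num_heads=8, num_points=49):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.num_points = num_points
        self.proj_q = nn.Linear(dim, dim)
        self.proj_kv = nn.Linear(dim, 2 * dim)
        self.proj_out = nn.Linear(dim, dim)
        # Simplified offset predictor: 2-D offset per sampling point,
        # conditioned on the (pooled) query features -> data-dependent.
        self.offset_net = nn.Linear(dim, 2 * num_points)

    def forward(self, x, H, W):
        # x: (B, N, C) token features with N = H * W.
        B, N, C = x.shape
        q = self.proj_q(x)

        # Predict offsets from queries and bound them with tanh.
        offsets = self.offset_net(q.mean(dim=1))            # (B, 2 * P)
        offsets = offsets.view(B, self.num_points, 1, 2).tanh()

        # Uniform reference grid in [-1, 1] (one point per sampling location).
        s = int(self.num_points ** 0.5)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, s), torch.linspace(-1, 1, s), indexing="ij")
        ref = torch.stack((xs, ys), dim=-1).view(1, self.num_points, 1, 2)
        grid = (ref.to(x.device) + offsets).clamp(-1, 1)    # (B, P, 1, 2)

        # Bilinearly sample features at the deformed key/value positions.
        feat = x.transpose(1, 2).reshape(B, C, H, W)
        sampled = F.grid_sample(feat, grid, align_corners=True)  # (B, C, P, 1)
        sampled = sampled.squeeze(-1).transpose(1, 2)            # (B, P, C)

        # Standard multi-head attention against the sampled keys/values.
        k, v = self.proj_kv(sampled).chunk(2, dim=-1)
        q = q.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj_out(out)
```

Because all queries attend to the same small set of sampled key/value locations, the cost scales with the number of sampling points rather than with dense all-pairs attention, which is the efficiency argument made in the description.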