Authors
Yehao Li, Ting Yao, Yingwei Pan, Tao Mei
Publication date
2022/4/1
Journal
IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume
45
Issue
2
Pages
1489-1500
Publisher
IEEE
Description
Transformer with self-attention has revolutionized the field of natural language processing, and has recently inspired Transformer-style architecture designs with competitive results in numerous computer vision tasks. Nevertheless, most existing designs directly employ self-attention over a 2D feature map to obtain the attention matrix from pairs of isolated queries and keys at each spatial location, leaving the rich contexts among neighboring keys under-exploited. In this work, we design a novel Transformer-style module, i.e., the Contextual Transformer (CoT) block, for visual recognition. This design fully capitalizes on the contextual information among input keys to guide the learning of a dynamic attention matrix, thus strengthening the capacity of visual representation. Technically, the CoT block first contextually encodes input keys via a convolution, leading to a static contextual representation of …
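The abstract describes a two-branch design: a static context obtained by convolutionally encoding the keys, and a dynamic context obtained by aggregating values with an attention matrix learned from the static context together with the queries, after which the two contexts are fused. The PyTorch sketch below illustrates that flow under stated assumptions; the class name `CoTBlock` and the head count, reduction ratio, and group width are illustrative choices, not the paper's released configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoTBlock(nn.Module):
    """Sketch of a Contextual Transformer (CoT) block.

    Assumes `dim` is divisible by `heads` and by the group width (4);
    hyperparameters are illustrative, not the paper's exact settings.
    """

    def __init__(self, dim: int, kernel_size: int = 3,
                 heads: int = 4, reduction: int = 4):
        super().__init__()
        self.k = kernel_size
        self.heads = heads
        # Static context K^1: contextually encode keys with a k x k conv.
        self.key_embed = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2,
                      groups=4, bias=False),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        # Values V: 1x1 conv embedding of the input.
        self.value_embed = nn.Sequential(
            nn.Conv2d(dim, dim, 1, bias=False),
            nn.BatchNorm2d(dim),
        )
        # Dynamic attention from [K^1, queries]: two consecutive 1x1 convs
        # producing, per head, one weight for each cell of the k x k grid.
        self.attn = nn.Sequential(
            nn.Conv2d(2 * dim, dim // reduction, 1, bias=False),
            nn.BatchNorm2d(dim // reduction),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, heads * kernel_size * kernel_size, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k1 = self.key_embed(x)                        # static context, B x C x H x W
        v = self.value_embed(x)
        # Attention logits conditioned on static context and queries (x itself).
        a = self.attn(torch.cat([k1, x], dim=1))      # B x (heads*k*k) x H x W
        a = a.view(b, self.heads, self.k * self.k, h, w).softmax(dim=2)
        # Gather each position's k x k neighborhood of values.
        v = F.unfold(v, self.k, padding=self.k // 2)  # B x (C*k*k) x (H*W)
        v = v.view(b, self.heads, c // self.heads, self.k * self.k, h, w)
        # Dynamic context K^2: attention-weighted local aggregation of values.
        k2 = (a.unsqueeze(2) * v).sum(dim=3).reshape(b, c, h, w)
        return k1 + k2                                # fuse static and dynamic context
```

Calling `CoTBlock(64)` on a tensor of shape `(1, 64, 32, 32)` returns a tensor of the same shape, so the block can serve as a drop-in replacement for a 3×3 convolution in a ResNet-style bottleneck, which is how the paper builds its CoTNet backbones.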
Total citations
2021: 5 | 2022: 84 | 2023: 199 | 2024: 137
Scholar articles
Y Li, T Yao, Y Pan, T Mei - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022