Authors
Xiang Long, Chuang Gan, Gerard De Melo, Jiajun Wu, Xiao Liu, Shilei Wen
Publication date
2018
Conference
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
Pages
7834-7843
Description
Recently, substantial research effort has focused on how to apply CNNs or RNNs to better capture temporal patterns in videos, so as to improve the accuracy of video classification. In this paper, however, we show that temporal information, especially longer-term patterns, may not be necessary to achieve competitive results on common trimmed video classification datasets. We investigate the potential of purely attention-based local feature integration. Accounting for the characteristics of such features in video classification, we propose a local feature integration framework based on attention clusters, and introduce a shifting operation to capture more diverse signals. We carefully analyze and compare the effects of different attention mechanisms, cluster sizes, and the use of the shifting operation, and also investigate the combination of attention clusters for multimodal integration. We demonstrate the effectiveness of our framework on three real-world video classification datasets, achieving competitive results on all of them. In particular, on the large-scale Kinetics dataset, our framework obtains an excellent single-model top-1 accuracy of 79.4% and top-5 accuracy of 94.0% on the validation set.
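The attention cluster with a shifting operation described above can be illustrated with a small sketch: each attention unit computes a softmax-weighted sum of the local features, then applies a learned scale-and-translate ("shift") followed by L2 normalization, and the cluster concatenates the outputs of several such units. This is a minimal NumPy illustration, not the authors' implementation; the parameter shapes, initialization, and the `attention_unit`/`attention_cluster` names are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_unit(X, w, alpha, beta):
    """One attention unit with a shifting operation (illustrative).

    X: (n, d) array of n local features of dimension d.
    w: (d,) attention parameters (assumed linear scoring for simplicity).
    alpha, beta: learnable scalars implementing the shift.
    """
    a = softmax(X @ w)              # attention weights over the n features
    v = a @ X                       # weighted sum of local features -> (d,)
    s = alpha * v + beta            # shift: scale and translate
    return s / np.linalg.norm(s)    # L2-normalize so units differ in direction

def attention_cluster(X, params):
    # Concatenate the outputs of multiple independent attention units.
    return np.concatenate([attention_unit(X, w, a, b) for w, a, b in params])

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                       # 5 local features, dim 8
params = [(rng.normal(size=8), 1.0, 0.1) for _ in range(3)]  # 3 units
out = attention_cluster(X, params)
print(out.shape)  # a cluster of 3 units over dim-8 features -> (24,)
```

Because each unit's output is L2-normalized after the shift, the concatenated representation keeps every unit on a comparable scale while the shift lets different units attend to diverse aspects of the same feature set.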