Authors
Zhangyong Tang, Tianyang Xu, Xiaojun Wu, Xue-Feng Zhu, Josef Kittler
Publication date
2024/3/24
Journal
Proceedings of the AAAI Conference on Artificial Intelligence
Volume
38
Issue
6
Pages
5189-5197
Description
Generative models (GMs) have received increasing research interest for their remarkable capacity for comprehensive understanding. However, their potential application in the domain of multi-modal tracking has remained unexplored. In this context, we seek to uncover the potential of harnessing generative techniques to address the critical challenge of information fusion in multi-modal tracking. In this paper, we delve into two prominent GM techniques, namely, Conditional Generative Adversarial Networks (CGANs) and Diffusion Models (DMs). Unlike the standard fusion process, in which the features from each modality are fed directly into the fusion block, we combine these multi-modal features with random noise in the GM framework, effectively transforming the original training samples into harder instances. This design excels at extracting discriminative cues from the features, enhancing the ultimate tracking performance. Based on this, we conduct extensive experiments across two multi-modal tracking tasks, three baseline methods, and four challenging benchmarks. The experimental results demonstrate that the proposed generative-based fusion mechanism achieves state-of-the-art performance, setting new records on GTOT, LasHeR, and RGBD1K. Code will be available at https://github.com/Zhangyong-Tang/GMMT.
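The abstract's central idea, fusing multi-modal features by mixing them with random noise and recovering the fused representation through a generative (diffusion-style) process, can be sketched as below. This is a minimal illustration only: the module name DiffusionFusion, the inputs rgb_feat/tir_feat, the linear noise schedule, the simplified denoising update, and the MLP denoiser are all assumptions for exposition, not the authors' GMMT implementation (see their repository for the actual code).

```python
# Hypothetical sketch of diffusion-style multi-modal feature fusion:
# a fused feature is recovered from pure noise while conditioning on
# both modality features at every step, so the fused representation
# must be generated rather than copied (the "harder instances" idea).
import torch
import torch.nn as nn

class DiffusionFusion(nn.Module):
    def __init__(self, dim: int, steps: int = 4):
        super().__init__()
        self.steps = steps
        # Linearly increasing noise schedule (an assumption, not the paper's).
        self.register_buffer("betas", torch.linspace(1e-4, 0.02, steps))
        # Denoiser conditioned on the concatenated modality features.
        self.denoiser = nn.Sequential(
            nn.Linear(dim * 3, dim * 2),
            nn.GELU(),
            nn.Linear(dim * 2, dim),
        )

    def forward(self, rgb_feat: torch.Tensor, tir_feat: torch.Tensor) -> torch.Tensor:
        # Start from random noise and iteratively denoise it into a fused feature.
        fused = torch.randn_like(rgb_feat)
        for t in reversed(range(self.steps)):
            cond = torch.cat([fused, rgb_feat, tir_feat], dim=-1)
            eps = self.denoiser(cond)  # predicted noise component
            beta = self.betas[t]
            # Simplified DDPM-style reverse update (no stochastic term).
            fused = (fused - beta.sqrt() * eps) / (1.0 - beta).sqrt()
        return fused

if __name__ == "__main__":
    fuse = DiffusionFusion(dim=256)
    rgb, tir = torch.randn(8, 256), torch.randn(8, 256)
    print(fuse(rgb, tir).shape)  # torch.Size([8, 256])
```

In a full tracker, the output of such a module would replace the feature produced by a conventional fusion block before being passed to the tracking head; training would additionally require a denoising loss, which is omitted here.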