Authors
Xiao Wang, Yuhang Huang, Dan Zeng, Guo-Jun Qi
Publication date
2023/3/28
Journal
IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume
45
Issue
9
Pages
10718-10730
Publisher
IEEE
Description
As a representative self-supervised method, contrastive learning has achieved great success in unsupervised training of representations. It trains an encoder by distinguishing positive samples from negative ones given query anchors. These positive and negative samples play critical roles in defining the objective to learn the discriminative encoder, preventing it from learning trivial features. While existing methods heuristically choose these samples, we present a principled method where both positive and negative samples are directly learnable end-to-end with the encoder. We show that the positive and negative samples can be cooperatively and adversarially learned by minimizing and maximizing the contrastive loss, respectively. This yields cooperative positives and adversarial negatives with respect to the encoder, which are updated to continuously track the learned representation of the query anchors …
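The abstract describes an update scheme in which the encoder and the positive samples descend the contrastive loss while the negative samples ascend it. The following is a minimal PyTorch sketch of that idea, assuming an InfoNCE-style loss; the network, dimensions, learning rates, and optimizer setup are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: encoder and positives MINIMIZE an InfoNCE-style loss (cooperative),
# negatives MAXIMIZE it (adversarial). All names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

dim, num_neg, batch = 128, 4096, 32
encoder = torch.nn.Sequential(torch.nn.Linear(512, dim))            # stand-in encoder
positives = torch.nn.Parameter(F.normalize(torch.randn(batch, dim), dim=1))
negatives = torch.nn.Parameter(F.normalize(torch.randn(num_neg, dim), dim=1))

opt_enc = torch.optim.SGD(encoder.parameters(), lr=0.03)
opt_pos = torch.optim.SGD([positives], lr=0.03)                      # cooperative: gradient descent
opt_neg = torch.optim.SGD([negatives], lr=0.03, maximize=True)       # adversarial: gradient ascent

def contrastive_loss(query, pos, neg, tau=0.2):
    """InfoNCE: pull each query toward its positive, push it away from all negatives."""
    q = F.normalize(query, dim=1)
    l_pos = (q * F.normalize(pos, dim=1)).sum(dim=1, keepdim=True) / tau   # (B, 1)
    l_neg = q @ F.normalize(neg, dim=1).t() / tau                          # (B, num_neg)
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long)                      # positive is class 0
    return F.cross_entropy(logits, labels)

x = torch.randn(batch, 512)                       # dummy batch of query anchors
loss = contrastive_loss(encoder(x), positives, negatives)
opt_enc.zero_grad(); opt_pos.zero_grad(); opt_neg.zero_grad()
loss.backward()
opt_enc.step(); opt_pos.step(); opt_neg.step()    # positives track the queries, negatives oppose them
```

Because all three updates share one backward pass, the learned positives and negatives continuously track the encoder's representation of the query anchors, as the abstract describes.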
Total citations
[Citation counts by year, 2022–2024]