Authors
Nghi DQ Bui, Yijun Yu, Lingxiao Jiang
Publication date
2021/7/11
Book
Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
Pages
511-521
Description
We propose Corder, a self-supervised contrastive learning framework for source code models. Corder is designed to alleviate the need for labeled data in code retrieval and code summarization tasks. The pre-trained Corder model can be used in two ways: (1) it can produce vector representations of code that can be applied to code retrieval tasks without labeled data; (2) it can be fine-tuned for tasks that still require labeled data, such as code summarization. The key innovation is that we train the source code model by asking it to recognize similar and dissimilar code snippets through a contrastive learning objective. To do so, we use a set of semantic-preserving transformation operators to generate code snippets that are syntactically diverse but semantically equivalent. Through extensive experiments, we have shown that the code models pretrained by Corder substantially …
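For intuition, here is a minimal sketch (not the authors' implementation) of the kind of contrastive objective the abstract describes: two semantic-preserving transformations of each snippet are encoded, and an NT-Xent-style loss pulls matching views together while pushing other snippets apart. The encoder and the transformation helpers named in the comments are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two semantic-preserving
    transformations of the same batch of code snippets."""
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for view i is its transformed counterpart at i+B (mod 2B).
    targets = torch.cat([torch.arange(batch) + batch,
                         torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage sketch: `encoder`, `rename_variables`, and `permute_statements`
# are assumed stand-ins for a code encoder and two of the paper's
# semantic-preserving transformation operators.
# z1 = encoder(rename_variables(snippets))
# z2 = encoder(permute_statements(snippets))
# loss = nt_xent_loss(z1, z2)
```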
Total citations
[Citation counts by year, 2021–2024]