Authors
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, Jie Tang
Publication date
2021/3
Source
arXiv preprint arXiv:2103.10360
Description
There have been various types of pretraining architectures, including autoregressive models (e.g., GPT), autoencoding models (e.g., BERT), and encoder-decoder models (e.g., T5). On the other hand, NLP tasks differ in nature, with three main categories being classification, unconditional generation, and conditional generation. However, none of the pretraining frameworks performs best for all tasks, which is inconvenient for model development and selection. We propose a novel pretraining framework GLM (General Language Model) to address this challenge. Compared to previous work, our architecture has three major benefits: (1) it performs well on classification, unconditional generation, and conditional generation tasks with one single pretrained model; (2) it outperforms BERT-like models on classification due to improved pretrain-finetune consistency; (3) it naturally handles variable-length blank filling, which is crucial for many downstream tasks. Empirically, GLM substantially outperforms BERT on the SuperGLUE natural language understanding benchmark with the same amount of pretraining data. Moreover, GLM with 1.25× the parameters of BERT-Large achieves the best performance in NLU, conditional generation, and unconditional generation at the same time, which demonstrates its generalizability to different downstream tasks.
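To illustrate the variable-length blank filling setup the abstract refers to, below is a minimal Python sketch of how a single blank-infilling training example could be constructed: a span is removed from the text, the input keeps a mask placeholder, and the removed span becomes the autoregressive target. The function name, special tokens, and toy sentence are hypothetical choices for illustration, not the authors' released code or exact formulation.

    def blank_infill_example(tokens, span_start, span_len,
                             mask_token="[MASK]", start_token="[S]", end_token="[E]"):
        """Build one blank-infilling example: the corrupted input keeps a
        [MASK] placeholder where the span was; the removed span is then
        generated autoregressively, token by token."""
        span = tokens[span_start:span_start + span_len]
        corrupted = tokens[:span_start] + [mask_token] + tokens[span_start + span_len:]
        target_in = [start_token] + span     # input fed when generating the blank
        target_out = span + [end_token]      # shifted labels, terminated by [E]
        return corrupted, target_in, target_out

    # Usage: mask a variable-length span from a toy sentence.
    toks = "GLM unifies understanding and generation tasks".split()
    src, tgt_in, tgt_out = blank_infill_example(toks, span_start=1, span_len=2)
    print(src)      # ['GLM', '[MASK]', 'and', 'generation', 'tasks']
    print(tgt_in)   # ['[S]', 'unifies', 'understanding']
    print(tgt_out)  # ['unifies', 'understanding', '[E]']

Because the blank can be of any length, the same objective can cast classification as filling a short cloze-style blank and generation as filling a long one, which is the sense in which one pretrained model serves all three task categories.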
Total citations
Per-year citation histogram for 2021–2024 (counts not recoverable from extraction)