Authors
Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych
Publication date
2021/4/16
Journal
EMNLP 2021
Description
Intermediate task fine-tuning has been shown to result in large transfer gains across many NLP tasks. With an abundance of candidate datasets as well as pre-trained language models, it has become infeasible to experiment with all combinations to find the best transfer setting. In this work, we first establish that similar sequential fine-tuning gains can be achieved in adapter settings, and subsequently consolidate previously proposed methods that efficiently identify beneficial tasks for intermediate transfer learning. We experiment with a diverse set of 42 intermediate and 11 target English classification, multiple choice, question answering, and sequence tagging tasks. Our results show that efficient embedding-based methods that rely solely on the respective datasets outperform computationally expensive few-shot fine-tuning approaches. Our best methods achieve an average Regret@3 of less than 1% across all target tasks, demonstrating that we are able to efficiently identify the best datasets for intermediate training.
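The abstract reports results with the Regret@3 metric. As a minimal sketch only, assuming Regret@k is the relative gap between the best achievable target-task score over all candidate intermediate tasks and the best score among the top-k tasks proposed by a selection method (function name and toy numbers below are illustrative, not from the paper):

```python
# Hypothetical sketch of a Regret@k computation for intermediate task selection.
# Assumption: Regret@k = relative gap (in %) between the best target-task score
# over ALL candidate intermediate tasks and the best score among the top-k tasks
# ranked by a selection method.

def regret_at_k(true_scores: dict, ranking: list, k: int = 3) -> float:
    """Relative regret (in %) of restricting transfer to the top-k ranked tasks.

    true_scores: intermediate task name -> target-task performance obtained
                 after actually transferring from that task.
    ranking:     intermediate task names ordered by the selection method,
                 best predicted transfer source first.
    """
    best_overall = max(true_scores.values())
    best_in_top_k = max(true_scores[t] for t in ranking[:k])
    return 100.0 * (best_overall - best_in_top_k) / best_overall


# Illustrative toy example (made-up numbers):
scores = {"mnli": 84.1, "squad": 83.5, "sst2": 81.0, "cola": 79.8}
predicted_ranking = ["squad", "mnli", "cola", "sst2"]
print(regret_at_k(scores, predicted_ranking, k=3))  # 0.0: the best task is within the top 3
```

Under this reading, a Regret@3 below 1% means the best intermediate task found among a method's top three suggestions is, on average, within 1% of the best possible choice.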
Scholar articles
What to Pre-Train on? Efficient Intermediate Task Selection
C Poth, J Pfeiffer, A Rücklé, I Gurevych - arXiv preprint arXiv:2104.08247, 2021