Authors
Krishna Chaitanya, Ertunc Erdil, Neerav Karani, Ender Konukoglu
Publication date
2020
Journal
Advances in neural information processing systems
Volume
33
Pages
12546-12558
Description
A key requirement for the success of supervised deep learning is a large labeled dataset, a condition that is difficult to meet in medical image analysis. Self-supervised learning (SSL) can help in this regard by providing a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations. Contrastive learning, a particular variant of SSL, is a powerful technique for learning image-level representations. In this work, we propose strategies for extending the contrastive learning framework for segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues. Specifically, we propose (1) novel contrasting strategies that leverage structural similarity across volumetric medical images (domain-specific cue) and (2) a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation (problem-specific cue). We carry out an extensive evaluation on three Magnetic Resonance Imaging (MRI) datasets. In the limited annotation setting, the proposed method yields substantial improvements compared to other self-supervision and semi-supervised learning techniques. When combined with a simple data augmentation technique, the proposed method reaches within 8% of benchmark performance using only two labeled MRI volumes for training. The code is made public at https://github.com/krishnabits001/domain_specific_cl.
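For intuition, below is a minimal sketch (not the authors' released code; see the repository above for that) of the paper's global contrasting idea: slices drawn from corresponding anatomical partitions of different volumes are treated as positives in an NT-Xent-style loss. The function name, the partitioning scheme, and all tensor shapes here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def global_contrastive_loss(embeddings, partition_ids, temperature=0.1):
    """NT-Xent-style loss where slices sharing a partition id are positives.

    embeddings:    (N, D) representations of 2D slices from several volumes.
    partition_ids: (N,) anatomical partition index of each slice; slices at
                   corresponding positions in different volumes share an id.
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                 # (N, N) scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))    # exclude self-similarity

    # Positives: other slices from the same partition, possibly other volumes.
    pos_mask = partition_ids.unsqueeze(0) == partition_ids.unsqueeze(1)
    pos_mask &= ~self_mask

    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Average log-probability over each slice's positives, then over slices.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts)
    return loss[pos_mask.any(dim=1)].mean()

# Toy usage: 8 slice embeddings from 2 volumes, each split into 4 partitions.
emb = torch.randn(8, 128)
parts = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])
print(global_contrastive_loss(emb, parts))
```

The paper's local loss follows the same contrastive template but is applied to feature-map regions rather than whole-image embeddings, encouraging distinctive per-region representations for pixel-level segmentation.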
Total citations
2021: 69 | 2022: 157 | 2023: 175 | 2024: 110
Scholar articles
K Chaitanya, E Erdil, N Karani, E Konukoglu - Advances in neural information processing systems, 2020