Authors
George Papandreou, Liang-Chieh Chen, Kevin P Murphy, Alan L Yuille
Publication date
2015
Conference
Proceedings of the IEEE International Conference on Computer Vision (ICCV)
Pages
1742-1750
Description
Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state of the art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https://bitbucket.org/deeplab/deeplab-public.
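As a rough illustration of the E-step/M-step alternation the abstract describes, here is a minimal PyTorch sketch, not the authors' released DeepLab system (the Caffe code at the bitbucket URL above). The TinySegNet model, the bias value, and the synthetic batch are hypothetical stand-ins, and the E-step heuristic of boosting classes known from image-level labels to be present while masking absent ones is only in the spirit of the paper's adaptive-bias EM variant.

```python
# Hedged sketch of EM-style weakly supervised segmentation training:
# E-step estimates latent pixel labels from image-level labels and the
# current model; M-step updates the network on those pseudo labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 21   # PASCAL VOC 2012: 20 object classes + background
BACKGROUND = 0

class TinySegNet(nn.Module):
    """Toy fully convolutional network standing in for the DCNN (assumption)."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):          # (B, 3, H, W) -> (B, C, H, W) logits
        return self.body(x)

def e_step(logits, image_labels, bias=3.0):
    """E-step: estimate latent pixel labels from current predictions by
    boosting classes present at the image level (including background),
    masking absent classes, and taking the per-pixel argmax."""
    absent = ~image_labels.bool()                              # (B, C)
    scores = logits + bias * image_labels[:, :, None, None]    # boost present
    scores = scores.masked_fill(absent[:, :, None, None], float("-inf"))
    return scores.argmax(dim=1)                                # (B, H, W)

def m_step(model, optimizer, images, pseudo_labels):
    """M-step: one SGD update of the network on the estimated pixel labels."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), pseudo_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinySegNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    # Synthetic batch: 4 RGB images with image-level label vectors only.
    images = torch.randn(4, 3, 32, 32)
    image_labels = torch.zeros(4, NUM_CLASSES)
    image_labels[:, BACKGROUND] = 1                 # background always present
    image_labels[torch.arange(4), torch.tensor([5, 7, 12, 15])] = 1

    for step in range(10):                          # alternate E- and M-steps
        with torch.no_grad():
            pseudo = e_step(model(images), image_labels)
        loss = m_step(model, optimizer, images, pseudo)
    print(f"final training loss: {loss:.4f}")
```

In this sketch the E-step is a hard assignment (argmax over biased scores), which keeps the M-step a standard cross-entropy update; the paper's bounding-box and semi-supervised settings would replace the E-step with constraints derived from those stronger annotations.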
Total citations
Citation counts by year, 2015–2024