Authors
Bharath Hariharan, Pablo Arbeláez, Ross Girshick, Jitendra Malik
Publication date
2015
Conference
Proceedings of the IEEE conference on computer vision and pattern recognition
Pages
447-456
Description
Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [20], where we improve the state of the art from 49.7 mean AP^r [20] to 59.0; keypoint localization, where we get a 3.3 point boost over [19]; and part labeling, where we show a 6.6 point gain over a strong baseline.
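The core idea in the abstract, a hypercolumn, amounts to upsampling activations from several CNN layers to the input resolution and concatenating them per pixel. A minimal sketch of that construction, assuming a torchvision VGG16 backbone with bilinear upsampling (the layer indices and backbone are illustrative choices, not the paper's exact configuration):

# Hypercolumn sketch: stack upsampled activations from several layers at each pixel.
# Assumption: torchvision VGG16 backbone; layer indices below pick the ReLU outputs
# of conv1_2, conv2_2, conv3_3, conv4_3, conv5_3 and are illustrative only.
import torch
import torch.nn.functional as F
import torchvision

def hypercolumns(image, layer_ids=(3, 8, 15, 22, 29)):
    """Return a (C_total, H, W) tensor of per-pixel hypercolumn descriptors.

    image: (3, H, W) float tensor, already normalized for the backbone.
    layer_ids: indices into vgg.features whose outputs are concatenated.
    """
    vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
    _, h, w = image.shape
    feats, x = [], image.unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layer_ids:
                # Upsample each intermediate map back to input resolution so every
                # pixel receives activations from all chosen layers "above" it.
                feats.append(F.interpolate(x, size=(h, w), mode="bilinear",
                                           align_corners=False))
    return torch.cat(feats, dim=1).squeeze(0)  # (sum of channel counts, H, W)

# Usage: descriptors = hypercolumns(torch.randn(3, 224, 224))

The resulting per-pixel vectors combine the fine spatial detail of early layers with the semantics of later ones, which is what the fine-grained localization tasks above exploit.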
Total citations
Cited-by-year chart, 2014–2024