Authors
Xiaodan Liang, Ke Gong, Xiaohui Shen, Liang Lin
Publication date
2018/3/29
Journal
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publisher
IEEE
Description
Human parsing and pose estimation have recently received considerable interest due to their substantial application potential. However, existing datasets have limited numbers of images and annotations and lack variety in human appearances and coverage of challenging cases in unconstrained environments. In this paper, we introduce a new benchmark named “Look into Person (LIP)” that provides a significant advancement in terms of scalability, diversity, and difficulty, which are crucial for future developments in human-centric analysis. This comprehensive dataset contains over 50,000 elaborately annotated images with 19 semantic part labels and 16 body joints, captured from a broad range of viewpoints, occlusions, and background complexities. Using these rich annotations, we perform detailed analyses of the leading human parsing and pose estimation approaches, thereby obtaining …
Total citations
Citations per year, 2018–2024
Scholar articles
X Liang, K Gong, X Shen, L Lin - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018