Authors
Jiang Wang, Zicheng Liu, Ying Wu, Junsong Yuan
Publication date
2012/6/16
Conference
2012 IEEE Conference on Computer Vision and Pattern Recognition
Pages
1290-1297
Publisher
IEEE
Description
Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities for dealing with this problem but also present some unique challenges. The depth maps captured by depth cameras are very noisy, and the 3D positions of the tracked joints may be completely wrong under serious occlusion, which increases the intra-class variation in the actions. In this paper, an actionlet ensemble model is learned to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both human motion and human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another …
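The abstract's central idea, an ensemble of "actionlets" (classifiers over discriminative joint subsets) combined to score each action class, can be sketched as a weighted combination of per-actionlet confidence scores. This is only an illustration of the general ensemble idea, not the authors' implementation; the scores, weights, and function names below are hypothetical.

```python
import numpy as np

def ensemble_score(actionlet_scores, weights):
    """Combine per-actionlet confidence scores into one class-score vector.

    actionlet_scores: (n_actionlets, n_classes) array of base-classifier
        confidences; weights: (n_actionlets,) learned mixing weights.
    Returns a (n_classes,) combined score vector (illustrative sketch only).
    """
    scores = np.asarray(actionlet_scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    return w @ scores

# Hypothetical example: two actionlets voting over three action classes.
scores = [[0.2, 0.9, 0.1],   # actionlet 1: confidence per action class
          [0.6, 0.3, 0.4]]   # actionlet 2
weights = [0.7, 0.3]
combined = ensemble_score(scores, weights)
predicted = int(np.argmax(combined))  # index of the highest-scoring class
```

The ensemble helps with the intra-class variation the abstract mentions: when occlusion corrupts some joints, actionlets built on unaffected joint subsets can still carry the vote.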
Total citations
[Citations-per-year chart, 2011–2024]