Authors
Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani, Yoichi Sato
Publication date
2018
Conference
IEEE Conference on Computer Vision and Pattern Recognition
Description
We present a new task that predicts future locations of people observed in first-person videos. Consider a first-person video stream continuously recorded by a wearable camera. Given a short clip of a person extracted from the complete stream, we aim to predict that person's location in future frames. To facilitate this future person localization ability, we make the following three key observations: a) first-person videos typically involve significant ego-motion, which greatly affects the location of the target person in future frames; b) the scale of the target person acts as a salient cue for estimating the perspective effect in first-person videos; c) first-person videos often capture people up close, making it easier to leverage target poses (e.g., where they look) for predicting their future locations. We incorporate these three observations into a prediction framework with a multi-stream convolution-deconvolution architecture. Experimental results show our method to be effective on our new dataset as well as on a public social interaction dataset.
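The described architecture can be sketched in minimal form: the three cues (past location/scale, camera ego-motion, and target pose) are each encoded by a temporal 1-D convolution, the resulting features are concatenated, and a transposed convolution decodes them into future locations. All layer sizes, input dimensionalities, and random weights below are illustrative assumptions, not the authors' actual configuration.

```python
# Hedged sketch of a multi-stream convolution-deconvolution predictor.
# Stream contents and all dimensions are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution over time with ReLU: x (T, C_in), w (k, C_in, C_out)."""
    k, c_in, c_out = w.shape
    T = x.shape[0] - k + 1
    out = np.empty((T, c_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def deconv1d(h, w, stride=2):
    """Transposed 1-D convolution: h (T, C_in), w (k, C_in, C_out)."""
    k, c_in, c_out = w.shape
    T_out = (h.shape[0] - 1) * stride + k
    out = np.zeros((T_out, c_out))
    for t in range(h.shape[0]):
        out[t * stride:t * stride + k] += np.tensordot(h[t], w, axes=([0], [1]))
    return out

T_past = 10                                 # number of observed frames (assumed)
loc_scale = rng.normal(size=(T_past, 3))    # (x, y, scale) per frame
ego = rng.normal(size=(T_past, 2))          # ego-motion per frame (assumed 2-D)
pose = rng.normal(size=(T_past, 36))        # 18 joints x (x, y), an assumption

k = 3
streams = [
    conv1d(loc_scale, rng.normal(size=(k, 3, 16)) * 0.1),
    conv1d(ego,       rng.normal(size=(k, 2, 16)) * 0.1),
    conv1d(pose,      rng.normal(size=(k, 36, 16)) * 0.1),
]
h = np.concatenate(streams, axis=1)         # fuse the three streams: (8, 48)

# Decode fused features into future (x, y) locations.
future = deconv1d(h, rng.normal(size=(k, 48, 2)) * 0.1)
print(future.shape)                         # → (17, 2)
```

With these assumed shapes, 10 observed frames and kernel size 3 yield 8 fused time steps, and the stride-2 transposed convolution expands them to 17 predicted (x, y) positions; the real model's output horizon depends on its actual layer configuration.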
Total citations
Per-year citation counts, 2018–2024 (chart values garbled in extraction)
Scholar articles
T Yagi, K Mangalam, R Yonetani, Y Sato - Proceedings of the IEEE Conference on Computer …, 2018