Authors
Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle Martin-Raugh, Harrison Kell, Chong Min Lee, Su-Youn Yoon
Publication date
2016/10/31
Book
Proceedings of the 18th ACM international conference on multimodal interaction
Pages
161-168
Description
As the popularity of video-based job interviews rises, so does the need for automated tools to evaluate interview performance. Real-world hiring decisions are based on assessments of knowledge and skills as well as holistic judgments of person-job fit. While previous research on automated scoring of interview videos shows promise, it lacks coverage of monologue-style responses to structured interview (SI) questions and content-focused interview rating. We report the development of a standardized video interview protocol as well as human rating rubrics focusing on verbal content, personality, and holistic judgment. A novel feature extraction method using "visual words" automatically learned from video analysis outputs and the Doc2Vec paradigm is proposed. Our promising experimental results suggest that this novel method provides effective representations for the automated scoring of interview videos.
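The abstract's "visual words" plus Doc2Vec idea can be sketched, under loose assumptions, as clustering frame-level video-analysis features into a visual vocabulary, quantizing each video into a sequence of visual-word tokens, and embedding those sequences with Doc2Vec. The Python sketch below is illustrative only; the cluster count, the toy feature arrays, and the Ridge scoring step are assumptions for demonstration, not the authors' implementation.

# Hedged sketch: visual-word quantization + Doc2Vec video embeddings.
# All names, parameter values, and data here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Stand-in for per-frame outputs of a video analysis tool:
# frame_features[i] has shape (num_frames_i, feature_dim).
frame_features = [np.random.rand(200, 32) for _ in range(50)]

# 1. Learn a "visual vocabulary" by clustering all frame-level feature vectors.
kmeans = KMeans(n_clusters=64, random_state=0, n_init=10)
kmeans.fit(np.vstack(frame_features))

# 2. Quantize each video into a sequence of visual-word tokens (cluster IDs).
documents = [
    TaggedDocument(words=[f"vw{c}" for c in kmeans.predict(feats)], tags=[i])
    for i, feats in enumerate(frame_features)
]

# 3. Train Doc2Vec so each interview video receives a fixed-length embedding.
model = Doc2Vec(documents, vector_size=100, window=5, min_count=1, epochs=40)

# 4. Use the embeddings as features for scoring (toy ratings shown here).
scores = np.random.rand(50)  # placeholder for human interview ratings
X = np.vstack([model.dv[i] for i in range(len(documents))])
regressor = Ridge().fit(X, scores)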
Total citations
[Per-year citation chart, 2017–2024]
Scholar articles
L Chen, G Feng, CW Leong, B Lehman… - Proceedings of the 18th ACM international conference …, 2016