Authors
Tero Karras, Timo Aila, Samuli Laine, Antti Herva, Jaakko Lehtinen
Publication date
2017/7/20
Journal
ACM Transactions on Graphics (TOG)
Volume
36
Issue
4
Pages
1-12
Publisher
ACM
Description
We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet.
We train our network with 3–5 minutes of high-quality animation data obtained using traditional, vision-based performance capture methods. Even though our primary goal is to model the speaking style of a single actor, our model yields reasonable results even when driven with audio from other speakers of a different gender, accent, or language, as we demonstrate with a user study. The results are applicable to in-game …
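As a rough illustration of the pipeline the description outlines (a short audio window in, per-vertex 3D positions out, with a small latent emotion vector as a side input), here is a minimal PyTorch sketch. The layer sizes, the raw-waveform convolutional frontend, and the names AudioToFace, audio_net, and decoder are illustrative assumptions, not the paper's published architecture, which operates on fixed audio features (autocorrelation coefficients) rather than raw samples.

```python
# Minimal sketch, assuming a raw-waveform frontend and illustrative layer sizes.
import torch
import torch.nn as nn

class AudioToFace(nn.Module):
    def __init__(self, n_vertices=5000, emotion_dim=16):
        super().__init__()
        # Audio stage (assumption): strided 1D convolutions that summarize
        # a raw waveform window into a compact feature vector.
        self.audio_net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Output stage: audio features concatenated with the latent emotion
        # code are decoded to the 3D coordinates of every mesh vertex.
        self.decoder = nn.Sequential(
            nn.Linear(128 + emotion_dim, 512), nn.ReLU(),
            nn.Linear(512, n_vertices * 3),
        )
        self.n_vertices = n_vertices

    def forward(self, waveform, emotion):
        # waveform: (batch, samples) raw audio; emotion: (batch, emotion_dim)
        feats = self.audio_net(waveform.unsqueeze(1)).squeeze(-1)  # (batch, 128)
        out = self.decoder(torch.cat([feats, emotion], dim=1))
        return out.view(-1, self.n_vertices, 3)  # (batch, n_vertices, 3)

# In the scheme the abstract describes, one emotion vector per training sample
# would be optimized jointly with the network weights; at inference the vector
# becomes the user-facing control for the emotional state of the face.
model = AudioToFace()
verts = model(torch.randn(2, 16000), torch.zeros(2, 16))
print(verts.shape)  # torch.Size([2, 5000, 3])
```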
Total citations
2017: 4, 2018: 24, 2019: 53, 2020: 50, 2021: 57, 2022: 66, 2023: 110, 2024: 74
Scholar articles
Audio-driven facial animation by joint end-to-end learning of pose and emotion
T Karras, T Aila, S Laine, A Herva, J Lehtinen - ACM Transactions on Graphics (TOG), 2017