Authors: Matthew Hill, Carlos Castillo, Alice O'Toole
Publication date: 2022/12/5
Journal: Journal of Vision
Volume: 22
Issue: 14
Pages: 4369-4369
Publisher: The Association for Research in Vision and Ophthalmology
Description: Deep Convolutional Neural Networks (DCNNs) recognize faces across image/appearance variation (e.g., viewpoint, illumination, expression) while retaining information about this variation (Hill et al., 2019). Here, we examined the extent to which DCNNs encode the 3D shape and surface reflectance properties of a face in the presence of challenging image variability. We generated synthetic face images using parametric models of 3D face shape and surface reflectance. The FLAME model uses a linear shape space to generate faces by varying 3D shape and expression parametrically (Li et al., 2017). The reflectance model was based on the Basel Face Model (Paysan et al., 2009). We generated five face shapes and five reflectance maps to create 25 unique faces by combining all possible pairs of shapes and reflectance maps. The stimulus set (n = 1,125) consisted of images of these faces rendered at 9 viewpoints …
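The factorial stimulus design described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual pipeline: the labels below are hypothetical placeholders for FLAME shape parameters, Basel-style reflectance maps, and rendering viewpoints, and the additional rendering factor(s) that bring the full set to n = 1,125 are truncated from this abstract, so only the stated 5 × 5 × 9 structure is shown.

```python
from itertools import product

# Hypothetical placeholder labels (not the authors' code):
shapes = [f"shape_{i}" for i in range(5)]              # 5 FLAME 3D face shapes
reflectances = [f"reflectance_{j}" for j in range(5)]  # 5 Basel-style reflectance maps
viewpoints = [f"view_{k}" for k in range(9)]           # 9 rendering viewpoints

# All shape/reflectance pairings give the 25 unique face identities.
faces = list(product(shapes, reflectances))

# Rendering each identity at every viewpoint gives 225 images;
# the remaining factor(s) yielding n = 1,125 are elided in the abstract.
renders = list(product(faces, viewpoints))

print(len(faces), len(renders))  # 25 225
```

This makes the counting explicit: 5 shapes × 5 reflectance maps = 25 identities, and 25 identities × 9 viewpoints = 225 renders before any further variation.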