Authors
TM Dado, Paolo Papale, Antonio Lozano, Lynn Le, MAJ van Gerven, PR Roelfsema, Y Güçlütürk, U Güçlü
Publication date
2023
Publisher
s.l.: s.n.
Description
Here, we aimed to explain neural representations of perception, for which we analyzed the relationship between multi-unit activity (MUA) recorded from the primate brain and various feature representations of visual stimuli. Our encoding analysis revealed that the latent representations of feature-disentangled generative adversarial networks (GANs) were the most effective candidate for predicting neural responses to images. Importantly, the use of synthesized yet photorealistic images allowed for superior control over these data, as their underlying latent representations were known a priori rather than approximated post hoc. As such, we leveraged this property in neural reconstruction of the perceived images. Taken together with the fact that the (unsupervised) generative models themselves were never optimized on neural data, these results highlight the importance of feature disentanglement and unsupervised training as driving factors in shaping neural representations.