Authors
Alessandro Ingrosso, Sebastian Goldt
Publication date
2022/9/26
Journal
Proceedings of the National Academy of Sciences
Volume
119
Issue
40
Pages
e2201854119
Description
Exploiting data invariances is crucial for efficient learning in both artificial and biological neural circuits. Understanding how neural networks can discover appropriate representations capable of harnessing the underlying symmetries of their inputs is thus a central question in machine learning and neuroscience. Convolutional neural networks, for example, were designed to exploit translation symmetry, and their capabilities triggered the first wave of deep learning successes. However, learning convolutions directly from translation-invariant data with a fully connected network has so far proven elusive. Here we show how initially fully connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localized, space-tiling receptive fields. These receptive fields match the filters of a convolutional network trained on the same task. By carefully designing data models for …
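The setup the abstract describes can be sketched in a few lines: train a fully connected two-layer network on a translation-invariant discrimination task and inspect the first-layer weights for localized receptive fields. The sketch below is hypothetical, not the authors' code; the 1D circular geometry, the erf activation, the classes defined by spatial correlation length, and the inverse-participation-ratio diagnostic are all assumptions chosen for illustration.

# Minimal, hypothetical sketch of a fully connected network trained on
# translation-invariant data (not the method of Ingrosso & Goldt 2022).
# Inputs are 1D circular signals; the two classes differ only in their
# spatial correlation length, so the task is translation invariant by
# construction. After training, each row of W is a receptive field.
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
D, H, STEPS, LR, BATCH = 64, 32, 5_000, 0.05, 16

def sample_batch(n, xi):
    """Translation-invariant inputs: white noise smoothed by circular
    convolution with a Gaussian kernel of correlation length xi."""
    grid = np.arange(D)
    dist = np.minimum(grid, D - grid)          # circular distances
    kernel = np.exp(-dist**2 / (2 * xi**2))
    z = rng.standard_normal((n, D))
    x = np.real(np.fft.ifft(np.fft.fft(z, axis=1) * np.fft.fft(kernel)[None, :], axis=1))
    return x / x.std(axis=1, keepdims=True)    # normalize each sample

# Two-layer network y = v . erf(W x / sqrt(D)), trained with plain SGD
W = rng.standard_normal((H, D)) / np.sqrt(D)
v = rng.standard_normal(H) / np.sqrt(H)

for t in range(STEPS):
    xi, label = (1.0, -1.0) if t % 2 == 0 else (3.0, 1.0)  # class = correlation length
    x = sample_batch(BATCH, xi)
    pre = x @ W.T / np.sqrt(D)                 # (batch, H) preactivations
    h = erf(pre)
    y = h @ v
    err = y - label                            # squared-loss residual
    # backprop through erf: d/du erf(u) = 2/sqrt(pi) * exp(-u^2)
    dpre = (err[:, None] * v[None, :]) * (2 / np.sqrt(np.pi)) * np.exp(-pre**2)
    W -= LR * (dpre.T @ x) / (BATCH * np.sqrt(D))
    v -= LR * (h.T @ err) / BATCH

# Localization diagnostic: inverse participation ratio of each receptive field
# (a delta-like localized row gives IPR near 1; a flat row gives IPR near 1/D)
ipr = (W**4).sum(axis=1) / (W**2).sum(axis=1)**2
print("mean IPR (higher = more localized):", ipr.mean())

Plotting individual rows of W after training is the natural follow-up check: localized, space-tiling bumps would be the qualitative signature the abstract refers to, whereas delocalized rows would indicate the network solved the task without a convolutional structure.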
Total citations
2022: 4 · 2023: 15 · 2024: 16
Scholar articles
A Ingrosso, S Goldt - Proceedings of the National Academy of Sciences, 2022