Authors
Oren Shriki, Haim Sompolinsky, Daniel D Lee
Publication date
2000/11/27
Journal
Advances in Neural Information Processing Systems
Volume
13
Pages
612-618
Description
The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections and the recurrent interactions results in simple learning rules for both sets of parameters. The conventional independent component analysis (ICA) learning algorithm can be recovered as a special case where there is an equal number of output units and no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example.
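The ICA special case mentioned in the description corresponds to the conventional infomax learning rule. Below is a minimal NumPy sketch of that special case (equal numbers of input and output units, no recurrent interactions) using the natural-gradient infomax update; the Laplace sources, mixing matrix, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    # Numerically stable logistic nonlinearity.
    return 0.5 * (1.0 + np.tanh(0.5 * u))

# Toy 2-D mixture (illustrative only; the paper's two-dimensional
# example is not reproduced here).
n = 2
S = rng.laplace(size=(n, 10000))          # super-Gaussian sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                # assumed mixing matrix
X = A @ S                                 # observed inputs

# Conventional infomax ICA, the special case noted in the abstract:
# equal numbers of input and output units, no recurrent interactions.
# Natural-gradient update: dW ∝ (I + (1 - 2 y) u^T) W,
# with u = W x and y = sigmoid(u).
W = np.eye(n)
lr = 0.01
batch = 100
for epoch in range(50):
    for i in range(0, X.shape[1], batch):
        x = X[:, i:i + batch]
        u = W @ x
        y = sigmoid(u)
        grad = (np.eye(n) + (1.0 - 2.0 * y) @ u.T / batch) @ W
        W += lr * grad

# If unmixing succeeded, W @ A is close to a scaled permutation matrix.
print(np.round(W @ A, 2))
```

The paper's contribution generalizes this rule to more output units than inputs and to recurrent interactions among the outputs; that extension is not reproduced in this sketch.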
Total citations
[Per-year citation histogram, 2001–2024]