Infomax

Infomax is an optimization principle for artificial neural networks and other information-processing systems. It prescribes that a function mapping a set of input values I to a set of output values O should be chosen or learned so as to maximize the average Shannon mutual information between I and O, subject to specified constraints or noise processes. Infomax algorithms are learning algorithms that carry out this optimization. The principle was described by Linsker in 1988.[1]
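
Stated symbolically (writing X for the input and Y = f(X) for the output, to avoid overloading the letter I, which is also the standard symbol for mutual information), the principle amounts to the following generic objective. This is a schematic restatement, not a formula taken verbatim from Linsker's paper:

```latex
% Infomax objective: choose the map f (e.g., a network's weights)
% to maximize the Shannon mutual information between input X and
% output Y = f(X), subject to constraints on f and/or noise in Y.
\max_{f}\; I(X;Y) \;=\; H(Y) - H(Y \mid X)
% When the conditional entropy H(Y|X) comes from a noise process
% that does not depend on f, maximizing I(X;Y) is equivalent to
% maximizing the output entropy H(Y) alone.
```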

Infomax, in its zero-noise limit, is related to the principle of redundancy reduction proposed for biological sensory processing by Horace Barlow in 1961,[2] and applied quantitatively to retinal processing by Atick and Redlich.[3]

A notable application of infomax is to independent component analysis (ICA): an infomax-based ICA algorithm recovers statistically independent source signals by maximizing the entropy of the network's nonlinearly transformed outputs. Infomax-based ICA was described by Bell and Sejnowski, and by Nadal and Parga, in 1995.[4][5]
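
To make the connection concrete, here is a minimal NumPy sketch of infomax ICA on a two-source toy problem. It follows the spirit of Bell and Sejnowski's entropy-maximization approach but uses the natural-gradient form of the weight update (a later refinement associated with Amari and colleagues) rather than the original rule; the nonlinearity, learning rate, iteration count, and data model are illustrative choices, not values from the paper:

```python
import numpy as np

def sigmoid(u):
    # Logistic squashing function; clip to avoid overflow in exp.
    return 1.0 / (1.0 + np.exp(-np.clip(u, -60.0, 60.0)))

def infomax_ica(x, lr=0.01, n_iter=500, seed=0):
    """Learn an unmixing matrix W by gradient ascent on the entropy
    of y = sigmoid(W @ x), using the natural-gradient update
        dW ~ (I + (1 - 2y) u^T) W,   u = W x,
    which avoids the matrix inverse in the original rule."""
    n, T = x.shape
    rng = np.random.default_rng(seed)
    W = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    for _ in range(n_iter):
        u = W @ x                                  # source estimates
        y = sigmoid(u)                             # squashed outputs
        # Natural-gradient infomax step, averaged over the batch.
        W += lr * (np.eye(n) + (1.0 - 2.0 * y) @ u.T / T) @ W
    return W

# Demo: unmix two independent super-Gaussian (Laplacian) sources.
rng = np.random.default_rng(1)
T = 5000
s = np.vstack([rng.laplace(size=T), rng.laplace(size=T)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])             # mixing matrix
x = A @ s                                          # observed mixtures
W = infomax_ica(x)
print(W @ A)   # approaches a scaled permutation matrix on success
```

With the logistic nonlinearity this rule separates super-Gaussian sources; success shows up as W @ A being close to a scaled permutation matrix, since ICA can recover sources only up to permutation and scaling.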

References

  1. Linsker R (1988). "Self-organization in a perceptual network". IEEE Computer. 21 (3): 105–117. doi:10.1109/2.36.
  2. Barlow H (1961). "Possible principles underlying the transformations of sensory messages". In Rosenblith W (ed.), Sensory Communication. Cambridge, MA: MIT Press. pp. 217–234.
  3. Atick JJ, Redlich AN (1992). "What does the retina know about natural scenes?". Neural Computation. 4 (2): 196–210. doi:10.1162/neco.1992.4.2.196.
  4. Bell AJ, Sejnowski TJ (1995). "An information-maximization approach to blind separation and blind deconvolution". Neural Computation. 7 (6): 1129–1159. doi:10.1162/neco.1995.7.6.1129. PMID 7584893.
  5. Nadal JP, Parga N (1999). "Sensory coding: information maximization and redundancy reduction". In Burdet G, Combe P, Parodi O (eds.), Neural Information Processing. World Scientific Series in Mathematical Biology and Medicine, vol. 7. Singapore: World Scientific. pp. 164–171.