Manifold hypothesis explained

The manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space.[1] [2] [3] [4] As a consequence, many data sets that initially appear to require many variables to describe can in fact be described by a comparatively small number of variables, analogous to the local coordinate system of the underlying manifold. It is suggested that this principle underpins the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features.
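
The idea is easiest to see with synthetic data. The sketch below is a minimal, hypothetical illustration (not drawn from the cited sources): each point is generated from only two latent coordinates and then pushed through a fixed smooth map into a 50-dimensional ambient space, so the data set looks high-dimensional while actually lying on a two-dimensional manifold.

```python
# Minimal sketch of the manifold hypothesis (illustrative only): data that
# appear 50-dimensional but are generated from 2 latent coordinates.
import numpy as np

rng = np.random.default_rng(0)
n_points, intrinsic_dim, ambient_dim = 1000, 2, 50

# The "comparatively small number of variables": 2 coordinates per point.
latent = rng.uniform(-1.0, 1.0, size=(n_points, intrinsic_dim))

# A fixed smooth embedding of the 2-D latent square into 50 dimensions.
W = rng.normal(size=(intrinsic_dim, ambient_dim))
b = rng.normal(size=ambient_dim)
data = np.sin(latent @ W + b)

# The ambient description uses 50 numbers per point, yet every point is fully
# determined by its 2 latent coordinates: the data lie on a 2-D manifold in R^50.
print(data.shape)  # (1000, 50)
```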

The manifold hypothesis is related to the effectiveness of nonlinear dimensionality reduction techniques in machine learning. Many dimensionality reduction techniques, such as manifold sculpting, manifold alignment, and manifold regularization, assume that the data lie along a low-dimensional submanifold.
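
As a concrete illustration of that assumption, the hedged sketch below applies Isomap (a standard manifold-learning method chosen here only because scikit-learn ships it; it is not one of the techniques named above) to a synthetic "Swiss roll", a two-dimensional surface embedded in three dimensions. It assumes scikit-learn is installed.

```python
# Hedged sketch: nonlinear dimensionality reduction under the assumption that
# the data lie near a low-dimensional submanifold. Requires scikit-learn.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 3-D points sampled from a 2-D "Swiss roll" surface; t is the true
# coordinate along the roll.
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# Unroll the surface into 2 recovered coordinates.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X.shape)          # (1500, 3)  ambient description
print(embedding.shape)  # (1500, 2)  low-dimensional description

# One recovered coordinate should track the true manifold parameter t
# (up to sign); the correlation is typically close to 1 in magnitude.
print(abs(np.corrcoef(embedding[:, 0], t)[0, 1]))
```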

A major implication of this hypothesis is that the ability to interpolate between samples is the key to generalization in deep learning.[5]
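
The interpolation claim can be illustrated on a toy manifold. The sketch below (an illustration, not taken from the cited book) treats the unit circle as a one-dimensional manifold embedded in the plane: interpolating the intrinsic angle keeps every intermediate point on the manifold, whereas linear interpolation of the ambient coordinates leaves it.

```python
# Illustrative sketch: interpolation along a manifold coordinate vs. naive
# linear interpolation in the ambient space.
import numpy as np

def on_circle(theta):
    """Embed the 1-D manifold coordinate (an angle) into 2-D ambient space."""
    return np.array([np.cos(theta), np.sin(theta)])

theta_a, theta_b = 0.0, np.pi / 2
a, b = on_circle(theta_a), on_circle(theta_b)

for alpha in np.linspace(0.0, 1.0, 5):
    ambient = (1 - alpha) * a + alpha * b                          # straight line in R^2
    manifold = on_circle((1 - alpha) * theta_a + alpha * theta_b)  # along the circle
    print(f"alpha={alpha:.2f}  |ambient|={np.linalg.norm(ambient):.2f}  "
          f"|manifold|={np.linalg.norm(manifold):.2f}")

# The ambient interpolant's norm dips below 1 (it leaves the unit circle),
# while the manifold interpolant stays on the circle throughout.
```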

The information geometry of statistical manifolds

An empirically motivated approach to the manifold hypothesis focuses on its correspondence with an effective theory for manifold learning, under the assumption that robust machine learning requires encoding the dataset of interest using methods of data compression. This perspective emerged gradually, using the tools of information geometry, through the combined efforts of scientists working on the efficient coding hypothesis, predictive coding, and variational Bayesian methods.

The argument for reasoning about the information geometry of the latent space of distributions rests on the existence and uniqueness of the Fisher information metric[6] (a worked form of this metric is given after the list below). In this general setting, the aim is to find a stochastic embedding of a statistical manifold. From the perspective of dynamical systems, in the big-data regime this manifold generally exhibits certain properties, such as homeostasis:

  1. We can sample large amounts of data from the underlying generative process.
  2. Machine learning experiments are reproducible, so the statistics of the generating process exhibit stationarity.
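
For concreteness, the metric invoked above can be written down explicitly. The display below is a standard textbook form (not taken from the cited workshop paper): the general definition of the Fisher information metric in coordinates θ, followed by its closed form for the two-parameter family of univariate Gaussians with θ = (μ, σ).

```latex
% Fisher information metric: general definition and the univariate Gaussian case.
\[
  g_{ij}(\theta)
  = \mathbb{E}_{x \sim p(x \mid \theta)}\!\left[
      \frac{\partial \log p(x \mid \theta)}{\partial \theta^{i}}\,
      \frac{\partial \log p(x \mid \theta)}{\partial \theta^{j}}
    \right],
  \qquad
  g(\mu, \sigma)
  = \begin{pmatrix}
      1/\sigma^{2} & 0 \\
      0 & 2/\sigma^{2}
    \end{pmatrix}.
\]
```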

In a sense made precise by theoretical neuroscientists working on the free energy principle, the statistical manifold in question possesses a Markov blanket.[7]

Notes and References

  1. Gorban, A. N.; Tyukin, I. Y. (2018). "Blessing of dimensionality: mathematical foundations of the statistical physics of data". Phil. Trans. R. Soc. A 376: 20170237. doi:10.1098/rsta.2017.0237. PMID 29555807. PMC 5869543. Bibcode:2018RSPTA.37670237G.
  2. Cayton, L. (2005). "Algorithms for manifold learning". University of California at San Diego. 12 (1–17): 1.
  3. Fefferman, Charles; Mitter, Sanjoy; Narayanan, Hariharan (2016). "Testing the manifold hypothesis". Journal of the American Mathematical Society 29 (4): 983–1049. doi:10.1090/jams/852. arXiv:1310.0425. S2CID 50258911.
  4. Olah, Christopher (2014). "Neural Networks, Manifolds, and Topology". Blog post.
  5. Chollet, François (2021). Deep Learning with Python (2nd ed.). Manning. pp. 128–129. ISBN 9781617296864.
  6. Caticha, Ariel (2015). "Geometry from Information Geometry". MaxEnt 2015, the 35th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. arXiv:1512.09076.
  7. Kirchhoff, Michael; Parr, Thomas; Palacios, Ensor; Friston, Karl; Kiverstein, Julian (2018). "The Markov blankets of life: autonomy, active inference and the free energy principle". J. R. Soc. Interface 15 (138): 20170792. doi:10.1098/rsif.2017.0792. PMID 29343629. PMC 5805980.