Manifold hypothesis

From Wikipedia, the free encyclopedia

In theoretical computer science and the study of machine learning, the manifold hypothesis is the hypothesis that many high-dimensional data sets occurring in the real world actually lie along low-dimensional manifolds inside that high-dimensional space.[1][2][3] As a consequence of the manifold hypothesis, many data sets that initially appear to require many variables to describe can in fact be described by a comparatively small number of variables, corresponding to the local coordinate system of the underlying manifold. This principle is suggested to underpin the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features.

The manifold hypothesis is related to the effectiveness of nonlinear dimensionality reduction techniques in machine learning. Many dimensionality reduction techniques, such as manifold sculpting, manifold alignment, and manifold regularization, assume that the data lies along a low-dimensional submanifold.
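The idea can be illustrated with a minimal synthetic sketch (the data set, dimensions, and noise level below are illustrative assumptions, not drawn from the literature cited here): points parameterized by a single angle, i.e. a one-dimensional manifold, are embedded in a 100-dimensional ambient space. The singular values of the resulting data matrix then reveal that, despite having 100 coordinates, the data effectively spans only about two directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D manifold (a circle) parameterized by a single angle t.
n, ambient_dim = 500, 100
t = rng.uniform(0, 2 * np.pi, n)
circle = np.column_stack([np.cos(t), np.sin(t)])  # shape (500, 2)

# Embed the circle into a 100-dimensional ambient space via a random
# linear map, and add a small amount of isotropic noise.
embedding = rng.standard_normal((2, ambient_dim))
data = circle @ embedding + 0.01 * rng.standard_normal((n, ambient_dim))

# Singular values of the centered data: the first two dominate,
# while the remaining 98 sit near the noise floor, showing that the
# data is effectively low-dimensional.
centered = data - data.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
print(s[:4])
```

Because the embedding here is linear, a linear method (SVD/PCA) suffices to expose the low intrinsic dimension; for data lying on a curved submanifold, the nonlinear techniques named above play the analogous role.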

References

  1. ^ Cayton, L. (2005). "Algorithms for manifold learning". Univ. of California at San Diego Tech. Rep. 12 (1–17): 1.
  2. ^ Fefferman, C.; Mitter, S.; Narayanan, H. (2016). "Testing the manifold hypothesis". Journal of the American Mathematical Society. 29 (4): 983–1049.
  3. ^ Olah, Christopher (2014). "Neural Networks, Manifolds, and Topology". https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/