Information projection

In information theory, the information projection or I-projection of a probability distribution q onto a set of distributions P is

    p^{*} = \arg\min_{p \in P} D_{\mathrm{KL}}(p \,\|\, q),

where D_{\mathrm{KL}}(p \,\|\, q) is the Kullback–Leibler divergence from q to p. Viewing the Kullback–Leibler divergence as a measure of distance, the I-projection p^{*} is the "closest" distribution to q of all the distributions in P.
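As an illustration, the following is a minimal numerical sketch (not from the article; the alphabet {0, ..., 5}, the mean constraint, and the use of scipy.optimize are assumptions made for the example). It computes the I-projection of the uniform distribution q onto the convex set P of distributions with a prescribed mean by minimizing D_{\mathrm{KL}}(p \,\|\, q) directly.

    import numpy as np
    from scipy.optimize import minimize

    # Finite alphabet {0, ..., 5}; q is the uniform distribution on it.
    x = np.arange(6)
    q = np.full(6, 1.0 / 6.0)

    def kl(p, q):
        # D_KL(p || q) on a finite alphabet, with the convention 0 log 0 = 0.
        p = np.asarray(p)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    # P = {p : E_p[X] = 3.5}, a convex (in fact linear) family of distributions.
    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},  # p must be a distribution
        {"type": "eq", "fun": lambda p: p @ x - 3.5},    # p must lie in P
    ]

    result = minimize(kl, x0=q, args=(q,), bounds=[(0.0, 1.0)] * 6,
                      constraints=constraints)
    p_star = result.x  # numerical I-projection of q onto P

Because q is uniform here, minimizing D_{\mathrm{KL}}(p \,\|\, q) amounts to maximizing the entropy of p subject to the mean constraint, so p_star comes out close to the exponential-family form p_i ∝ exp(λ x_i).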

The I-projection is useful in setting up information geometry, notably because of the following inequality, valid when P is convex:[1]

    D_{\mathrm{KL}}(p \,\|\, q) \geq D_{\mathrm{KL}}(p \,\|\, p^{*}) + D_{\mathrm{KL}}(p^{*} \,\|\, q) \quad \text{for all } p \in P.

This inequality can be interpreted as an information-geometric analogue of the Pythagorean theorem, where the KL divergence is viewed as squared distance in a Euclidean space.
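The inequality can also be checked numerically. The following self-contained sketch (the binary alphabet, the choice q = Bernoulli(0.3), and the convex set P = {Bernoulli(t) : t ≥ 0.7} are assumptions made for the example) verifies it; for this P, the I-projection of q is p* = Bernoulli(0.7), the member of P nearest to q.

    import numpy as np

    def kl(p, q):
        # D_KL(p || q) on a finite alphabet, with the convention 0 log 0 = 0.
        p, q = np.asarray(p, float), np.asarray(q, float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    q = np.array([0.7, 0.3])       # q = Bernoulli(0.3)
    p_star = np.array([0.3, 0.7])  # I-projection of q onto P = {Bernoulli(t) : t >= 0.7}

    # Every member of P satisfies the "Pythagorean" bound with respect to p_star.
    for t in np.linspace(0.7, 0.99, 5):
        p = np.array([1.0 - t, t])
        assert kl(p, q) >= kl(p, p_star) + kl(p_star, q) - 1e-12

Equality holds at t = 0.7, and the gap grows as p moves deeper into P, mirroring the Euclidean picture in which p* plays the role of the foot of a perpendicular from q onto P.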

It is worthwhile to note that since D_{\mathrm{KL}}(p \,\|\, q) \geq 0 and is continuous in p, if P is closed and non-empty, then there exists at least one minimizer to the optimization problem framed above. Furthermore, if P is convex, then the optimal distribution is unique.

The reverse I-projection, also known as moment projection or M-projection, is

    p^{*} = \arg\min_{p \in P} D_{\mathrm{KL}}(q \,\|\, p).

Since the KL divergence is not symmetric in its arguments, the I-projection and the M-projection exhibit different behavior. In the I-projection, p will typically under-estimate the support of q and will lock onto one of its modes; this is because p(x) = 0 is required whenever q(x) = 0 in order to keep the KL divergence finite. In the M-projection, p will typically over-estimate the support of q; this is because p(x) > 0 is required whenever q(x) > 0 in order to keep the KL divergence finite.
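This contrast can be made concrete with a small experiment. The sketch below (an illustrative setup, not from the article: the grid, the bimodal target, the single-Gaussian family, and the use of scipy are all assumptions) fits one Gaussian to a two-mode target, once as an I-projection and once as an M-projection.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    x = np.linspace(-6.0, 6.0, 601)
    dx = x[1] - x[0]

    def normalize(f):
        return f / (f.sum() * dx)

    def kl(a, b):
        # Discretized D_KL(a || b); the small epsilon guards against log(0).
        eps = 1e-300
        return float(np.sum(a * np.log((a + eps) / (b + eps))) * dx)

    # Bimodal target q: an equal mixture of N(-2, 0.5^2) and N(+2, 0.5^2).
    q = normalize(0.5 * norm.pdf(x, -2.0, 0.5) + 0.5 * norm.pdf(x, 2.0, 0.5))

    def gaussian(theta):
        mu, log_sigma = theta
        return normalize(norm.pdf(x, mu, np.exp(log_sigma)))

    # I-projection: minimize D(p || q); mode-seeking, under-estimates support.
    i_proj = minimize(lambda th: kl(gaussian(th), q), x0=[0.5, 0.0]).x
    # M-projection: minimize D(q || p); mass-covering, over-estimates support.
    m_proj = minimize(lambda th: kl(q, gaussian(th)), x0=[0.5, 0.0]).x

    print("I-projection: mu=%.2f, sigma=%.2f" % (i_proj[0], np.exp(i_proj[1])))
    print("M-projection: mu=%.2f, sigma=%.2f" % (m_proj[0], np.exp(m_proj[1])))

With these settings the I-projection typically ends up as a narrow Gaussian centred on one of the two modes, while the M-projection ends up as a broad Gaussian (mean near 0, standard deviation near 2) that covers both.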


The concept of information projection can be extended to arbitrary statistical f-divergences and other divergences.[2]

References

  1. ^ Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory (2nd ed.). Hoboken, New Jersey: Wiley Interscience. p. 367 (Theorem 11.6.1).
  2. ^ Nielsen, Frank (2018). "What is... an information projection?" (PDF). Notices of the AMS. 65 (3): 321–324.

