In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook.
It is used to find a lower bound on the error probability of any decoder, as well as lower bounds for minimax risks in density estimation.
Let the random variables $X$ and $Y$ represent input and output messages with a joint probability $P(x, y)$. Let $e$ represent an occurrence of error; i.e., that $X \neq \tilde{X}$, with $\tilde{X} = f(Y)$ being an approximate version of $X$. Fano's inequality is

$$H(X \mid Y) \leq H_b(e) + P(e) \log(|\mathcal{X}| - 1),$$

where $\mathcal{X}$ denotes the support of $X$,

$$H(X \mid Y) = -\sum_{i,j} P(x_i, y_j) \log P(x_i \mid y_j)$$

is the conditional entropy,

$$P(e) = P(X \neq \tilde{X})$$

is the probability of the communication error, and

$$H_b(e) = -P(e) \log P(e) - (1 - P(e)) \log(1 - P(e))$$

is the corresponding binary entropy.
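To make these quantities concrete, the following minimal sketch evaluates both sides of the inequality for a hypothetical toy model (a uniform ternary source, a symmetric channel that delivers the correct symbol with probability 0.8, and a MAP decoder; all numbers are chosen only for illustration):

```python
import numpy as np

# Hypothetical toy model: a uniform ternary source X sent through a symmetric
# channel that delivers the correct symbol with probability 0.8, decoded by
# the MAP rule.  All numbers are chosen only for illustration.
K, p_ok = 3, 0.8
P_xy = np.full((K, K), (1 - p_ok) / (K - 1) / K)  # joint P(x, y), rows = x
np.fill_diagonal(P_xy, p_ok / K)

x_hat = P_xy.argmax(axis=0)                       # MAP decoder f(y)

# Conditional entropy H(X|Y) = -sum_{x,y} P(x,y) log2 P(x|y)
P_y = P_xy.sum(axis=0)
H_X_given_Y = -np.sum(P_xy * np.log2(P_xy / P_y))

# Error probability P(e) = P(X != f(Y)) and binary entropy H_b(e)
P_e = 1.0 - sum(P_xy[x_hat[y], y] for y in range(K))
H_b = -P_e * np.log2(P_e) - (1 - P_e) * np.log2(1 - P_e)

bound = H_b + P_e * np.log2(K - 1)                # right-hand side of Fano
print(f"H(X|Y) = {H_X_given_Y:.4f} <= {bound:.4f}")
```

Here the two sides coincide: when an error leaves $X$ uniformly distributed over the remaining $|\mathcal{X}| - 1$ symbols, as in this symmetric channel, Fano's inequality holds with equality.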
Define an indicator random variable $E$, that indicates the event that our estimate $\tilde{X} = f(Y)$ is in error,

$$E := \begin{cases} 1 & \text{if } \tilde{X} \neq X, \\ 0 & \text{if } \tilde{X} = X. \end{cases}$$
Consider $H(E, X \mid \tilde{X})$. We can use the chain rule for entropies to expand this in two different ways:

$$\begin{aligned} H(E, X \mid \tilde{X}) &= H(X \mid \tilde{X}) + H(E \mid X, \tilde{X}) \\ &= H(E \mid \tilde{X}) + H(X \mid E, \tilde{X}). \end{aligned}$$
Equating the two:

$$H(X \mid \tilde{X}) + H(E \mid X, \tilde{X}) = H(E \mid \tilde{X}) + H(X \mid E, \tilde{X}).$$
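As a sanity check, this short sketch evaluates all four conditional entropies directly and confirms that both expansions agree. It reuses the hypothetical toy model above, where the MAP decoder is the identity map, so the joint of $(X, \tilde{X})$ coincides with the joint of $(X, Y)$:

```python
import numpy as np

def cond_H(P, given):
    """H(remaining axes | axes in `given`) for a joint table P, in bits."""
    other = tuple(a for a in range(P.ndim) if a not in given)
    marg = np.broadcast_to(P.sum(axis=other, keepdims=True), P.shape)
    mask = P > 0
    return -np.sum(P[mask] * np.log2(P[mask] / marg[mask]))

# Same hypothetical toy model as above: uniform ternary X, correct symbol
# received with probability 0.8.
K, p_ok = 3, 0.8
P_xxt = np.full((K, K), (1 - p_ok) / (K - 1) / K)   # joint P(x, x~)
np.fill_diagonal(P_xxt, p_ok / K)

# Joint table over (E, X, X~); E = 1 exactly when X != X~, so it carries no
# randomness of its own beyond (X, X~).
P = np.zeros((2, K, K))
for x in range(K):
    for xt in range(K):
        P[int(x != xt), x, xt] = P_xxt[x, xt]

both = cond_H(P, given=(2,))                                    # H(E, X | X~)
way1 = cond_H(P.sum(axis=0), given=(1,)) + cond_H(P, given=(1, 2))
way2 = cond_H(P.sum(axis=1), given=(1,)) + cond_H(P, given=(0, 2))
print(np.isclose(both, way1), np.isclose(both, way2))           # True True
```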
Expanding the right-most term, $H(X \mid E, \tilde{X})$:

$$H(X \mid E, \tilde{X}) = H(X \mid E = 0, \tilde{X}) \, P(E = 0) + H(X \mid E = 1, \tilde{X}) \, P(E = 1).$$
Since $E = 0$ means $X = \tilde{X}$, being given the value of $\tilde{X}$ allows us to know the value of $X$ with certainty. This makes the term $H(X \mid E = 0, \tilde{X}) = 0$.
On the other hand, $E = 1$ means that $X \neq \tilde{X}$, hence given the value of $\tilde{X}$, we can narrow down $X$ to one of $|\mathcal{X}| - 1$ different values, allowing us to upper bound the conditional entropy: $H(X \mid E = 1, \tilde{X}) \leq \log(|\mathcal{X}| - 1)$. Hence

$$H(X \mid E, \tilde{X}) \leq P(E = 1) \log(|\mathcal{X}| - 1) = P(e) \log(|\mathcal{X}| - 1).$$
The other term satisfies $H(E \mid \tilde{X}) \leq H(E)$, because conditioning reduces entropy. Because of the way $E$ is defined, $H(E) = H_b(e)$, meaning that $H(E \mid \tilde{X}) \leq H_b(e)$. Moreover, $H(E \mid X, \tilde{X}) = 0$, since $E$ is a deterministic function of $X$ and $\tilde{X}$. Putting it all together,

$$H(X \mid \tilde{X}) \leq H_b(e) + P(e) \log(|\mathcal{X}| - 1).$$
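The derivation up to this point uses only the joint distribution of $(X, \tilde{X})$, so the intermediate bound can be stress-tested on arbitrary joints. A small randomized check, with a hypothetical 4-letter alphabet:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4  # alphabet size |X| (hypothetical)

# Stress-test the intermediate bound H(X|X~) <= H_b(e) + P(e) log(|X|-1)
# on many random joint distributions over (X, X~).
for _ in range(1000):
    P = rng.random((K, K))
    P /= P.sum()                                  # random joint P(x, x~)
    H_X_given_Xt = -np.sum(P * np.log2(P / P.sum(axis=0)))
    P_e = 1.0 - np.trace(P)                       # P(e) = P(X != X~)
    H_b = -P_e * np.log2(P_e) - (1 - P_e) * np.log2(1 - P_e)
    assert H_X_given_Xt <= H_b + P_e * np.log2(K - 1) + 1e-12
print("bound held on all random joints")
```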
Because $X \to Y \to \tilde{X}$ is a Markov chain, we have $I(X; \tilde{X}) \leq I(X; Y)$ by the data processing inequality, and hence $H(X \mid \tilde{X}) \geq H(X \mid Y)$, giving us

$$H(X \mid Y) \leq H_b(e) + P(e) \log(|\mathcal{X}| - 1).$$
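A final sketch, with hypothetical alphabet sizes and a randomly drawn source, channel, and decoder (so that $X \to Y \to \tilde{X}$ is a Markov chain by construction), checks both the data processing step and the resulting inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
Kx, Ky = 4, 5  # alphabet sizes (hypothetical)

# Random source, channel, and decoder: X -> Y -> X~ is a Markov chain by
# construction, so H(X|Y) <= H(X|X~) (data processing inequality) and the
# final form of Fano's inequality should both hold.
p_x = rng.random(Kx); p_x /= p_x.sum()                       # source P(x)
W = rng.random((Kx, Ky)); W /= W.sum(axis=1, keepdims=True)  # channel P(y|x)
f = rng.integers(0, Kx, size=Ky)                             # decoder f(y)

P_xy = p_x[:, None] * W                                      # joint P(x, y)
P_xxt = np.zeros((Kx, Kx))                                   # joint P(x, x~)
for y in range(Ky):
    P_xxt[:, f[y]] += P_xy[:, y]

def cond_H(P):
    """H(row | column) = -sum P(a,b) log2 P(a|b), in bits."""
    col = np.broadcast_to(P.sum(axis=0), P.shape)
    mask = P > 0
    return -np.sum(P[mask] * np.log2(P[mask] / col[mask]))

H_XY, H_XXt = cond_H(P_xy), cond_H(P_xxt)
P_e = 1.0 - np.trace(P_xxt)
H_b = -P_e * np.log2(P_e) - (1 - P_e) * np.log2(1 - P_e)
print(H_XY <= H_XXt + 1e-12)                        # data processing inequality
print(H_XY <= H_b + P_e * np.log2(Kx - 1) + 1e-12)  # Fano's inequality
```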