In information theory, the cross-entropy between two probability distributions $p$ and $q$ over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme used for the set is optimized for an estimated probability distribution $q$, rather than the true distribution $p$.
Definition

The cross-entropy of the distribution $q$ relative to a distribution $p$ over a given set is defined as follows:

$$H(p,q) = -\operatorname{E}_p[\log q],$$

where $\operatorname{E}_p[\cdot]$ is the expected value operator with respect to the distribution $p$.

The definition may be formulated using the Kullback–Leibler divergence $D_{\mathrm{KL}}(p \parallel q)$, the divergence of $p$ from $q$ (also known as the relative entropy of $p$ with respect to $q$):

$$H(p,q) = H(p) + D_{\mathrm{KL}}(p \parallel q),$$

where $H(p)$ is the entropy of $p$.
For discrete probability distributions $p$ and $q$ with the same support $\mathcal{X}$, this means

$$H(p,q) = -\sum_{x \in \mathcal{X}} p(x)\,\log q(x). \qquad \text{(Eq. 1)}$$
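A minimal sketch of Eq. 1 and of the decomposition $H(p,q) = H(p) + D_{\mathrm{KL}}(p \parallel q)$ above, assuming NumPy and natural logarithms (nats rather than bits); the example distributions `p` and `q` are arbitrary illustrative choices:

```python
import numpy as np

def cross_entropy(p, q):
    """Discrete cross-entropy H(p, q) = -sum_x p(x) log q(x), in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.sum(p * np.log(q))

def entropy(p):
    """Shannon entropy H(p) in nats; terms with p(x) = 0 contribute nothing."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / q[nz]))

p = np.array([0.5, 0.25, 0.25])   # true distribution
q = np.array([0.4, 0.4, 0.2])     # estimated distribution

# The decomposition H(p, q) = H(p) + D_KL(p || q) holds numerically.
print(cross_entropy(p, q))                   # ≈ 1.0896
print(entropy(p) + kl_divergence(p, q))      # same value
```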
The situation for continuous distributions is analogous. We have to assume that $p$ and $q$ are absolutely continuous with respect to some reference measure $r$ (usually $r$ is a Lebesgue measure on a Borel σ-algebra). Let $P$ and $Q$ be probability density functions of $p$ and $q$ with respect to $r$. Then

$$-\int_{\mathcal{X}} P(x)\,\log Q(x)\,dr(x) = \operatorname{E}_p[-\log Q],$$

and therefore

$$H(p,q) = -\int_{\mathcal{X}} P(x)\,\log Q(x)\,dr(x). \qquad \text{(Eq. 2)}$$
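As an illustration of Eq. 2, the following sketch (assuming NumPy and SciPy are available) evaluates the integral numerically for two Gaussian densities and checks it against the known closed form $\tfrac{1}{2}\log(2\pi\sigma_2^2) + \bigl(\sigma_1^2 + (\mu_1-\mu_2)^2\bigr)/(2\sigma_2^2)$; the particular parameter values are arbitrary:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Cross-entropy H(p, q) of two Gaussians, p = N(mu1, s1^2) and q = N(mu2, s2^2).
mu1, s1 = 0.0, 1.0   # parameters of the "true" density P
mu2, s2 = 1.0, 2.0   # parameters of the "model" density Q

# Eq. 2 evaluated by numerical integration over the real line.
integrand = lambda x: -norm.pdf(x, mu1, s1) * norm.logpdf(x, mu2, s2)
numeric, _ = integrate.quad(integrand, -np.inf, np.inf)

# Closed form for Gaussian cross-entropy (in nats).
closed = 0.5 * np.log(2 * np.pi * s2**2) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2)

print(numeric, closed)   # both ≈ 1.862
```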
NB: The notation $H(p,q)$ is also used for a different concept, the joint entropy of $p$ and $q$.
Motivation

In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value $x_i$ out of a set of possibilities $\{x_1,\ldots,x_n\}$ can be seen as representing an implicit probability distribution $q(x_i) = \left(\tfrac{1}{2}\right)^{\ell_i}$ over $\{x_1,\ldots,x_n\}$, where $\ell_i$ is the length of the code for $x_i$ in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distribution $q$ is assumed while the data actually follows a distribution $p$. That is why the expectation is taken over the true probability distribution $p$ and not $q$. Indeed, the expected message-length under the true distribution $p$ is

$$\operatorname{E}_p[\ell] = -\operatorname{E}_p\left[\frac{\ln q(x)}{\ln 2}\right] = -\operatorname{E}_p\left[\log_2 q(x)\right] = -\sum_{x_i} p(x_i)\,\log_2 q(x_i) = -\sum_{x} p(x)\,\log_2 q(x) = H(p,q).$$
Estimation

There are many situations where cross-entropy needs to be measured but the distribution of $p$ is unknown. An example is language modeling, where a model is created based on a training set $T$, and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example, $p$ is the true distribution of words in any corpus, and $q$ is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula:

$$H(T,q) = -\sum_{i=1}^{N} \frac{1}{N}\,\log_2 q(x_i),$$

where $N$ is the size of the test set, and $q(x)$ is the probability of event $x$ estimated from the training set. In other words, $q(x_i)$ is the probability estimate of the model that the $i$-th word of the text is $x_i$. The sum is averaged over the $N$ words of the test. This is a Monte Carlo estimate of the true cross-entropy, where the test set is treated as samples from $p(x)$.[citation needed]
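A minimal sketch of this Monte Carlo estimate, assuming NumPy; here `p_true` is the (normally unknown) word distribution, used only to draw a synthetic test set, and `q_model` stands in for the model's estimated probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = np.array(["the", "cat", "sat", "mat"])
p_true = np.array([0.5, 0.2, 0.2, 0.1])    # unknown in practice; used here only to simulate data
q_model = np.array([0.4, 0.3, 0.2, 0.1])   # the model's estimated word probabilities

# Draw a test set of N words from the true distribution.
N = 100_000
test_ids = rng.choice(len(vocab), size=N, p=p_true)

# Monte Carlo estimate: H(T, q) = -(1/N) * sum_i log2 q(x_i).
h_estimate = -np.mean(np.log2(q_model[test_ids]))

# For comparison, the exact cross-entropy H(p, q) in bits.
h_exact = -np.sum(p_true * np.log2(q_model))

print(h_estimate, h_exact)   # the estimate approaches h_exact as N grows
```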
Relation to log-likelihood

In classification problems we want to estimate the probability of different outcomes. Let the estimated probability of outcome $i$ be $q_{\theta}(X=i)$, with parameters $\theta$ to be optimized, and let the frequency (empirical probability) of outcome $i$ in the training set be $p(X=i)$. Given $N$ conditionally independent samples in the training set, the likelihood of the parameters $\theta$ of the model $q_{\theta}(X=x)$ on the training set is

$$\mathcal{L}(\theta) = \prod_{i \in X} ({\text{est. probability of }} i)^{\text{number of occurrences of } i} = \prod_{i} q_{\theta}(X=i)^{N p(X=i)},$$

so the log-likelihood, divided by $N$, is

$$\frac{1}{N}\log \mathcal{L}(\theta) = \frac{1}{N}\log \prod_{i} q_{\theta}(X=i)^{N p(X=i)} = \sum_{i} p(X=i)\,\log q_{\theta}(X=i) = -H(p,q),$$

so that maximizing the likelihood with respect to the parameters $\theta$ is the same as minimizing the cross-entropy.
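A short sketch of this identity, assuming NumPy; `labels` is an arbitrary synthetic training set and `q_theta` an arbitrary candidate model:

```python
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])   # N = 10 training outcomes
q_theta = np.array([0.3, 0.3, 0.4])                 # candidate model probabilities

N = len(labels)
# Empirical distribution p(X = i): relative frequency of each outcome.
p_emp = np.bincount(labels, minlength=len(q_theta)) / N

# Average log-likelihood (1/N) log L(theta) = sum_i p(X=i) log q_theta(X=i).
avg_loglik = np.mean(np.log(q_theta[labels]))
neg_cross_entropy = np.sum(p_emp * np.log(q_theta))

print(avg_loglik, neg_cross_entropy)   # equal: maximizing one minimizes H(p, q_theta)
```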
Cross-entropy minimization

Cross-entropy minimization is frequently used in optimization and rare-event probability estimation. When comparing a distribution $q$ against a fixed reference distribution $p$, cross-entropy and KL divergence are identical up to an additive constant (since $p$ is fixed): both take on their minimal values when $p = q$, which is $0$ for KL divergence, and $H(p)$ for cross-entropy.[citation needed] In the engineering literature, the principle of minimising KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called the Principle of Minimum Cross-Entropy (MCE), or Minxent.

However, as discussed in the article Kullback–Leibler divergence, sometimes the distribution $q$ is the fixed prior reference distribution, and the distribution $p$ is optimised to be as close to $q$ as possible, subject to some constraint. In this case the two minimisations are not equivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be $D_{\mathrm{KL}}(p \parallel q)$, rather than $H(p,q)$.
Cross-entropy loss function and logistic regression

Cross-entropy can be used to define a loss function in machine learning and optimization. The true probability $p_i$ is the true label, and the given distribution $q_i$ is the predicted value of the current model.

More specifically, consider logistic regression, which (among other things) can be used to classify observations into two possible classes (often simply labelled $0$ and $1$). The output of the model for a given observation, given a vector of input features $\mathbf{x}$, can be interpreted as a probability, which serves as the basis for classifying the observation. The probability is modeled using the logistic function $g(z) = 1/(1+e^{-z})$, where $z$ is some function of the input vector $\mathbf{x}$, commonly just a linear function. The probability of the output $y = 1$ is given by

$$q_{y=1} = \hat{y} \equiv g(\mathbf{w}\cdot\mathbf{x}) = 1/(1+e^{-\mathbf{w}\cdot\mathbf{x}}),$$

where the vector of weights $\mathbf{w}$ is optimized through some appropriate algorithm such as gradient descent. Similarly, the complementary probability of finding the output $y = 0$ is simply given by

$$q_{y=0} = 1 - \hat{y}.$$

Having set up our notation, $p \in \{y, 1-y\}$ and $q \in \{\hat{y}, 1-\hat{y}\}$, we can use cross-entropy to get a measure of dissimilarity between $p$ and $q$:

$$H(p,q) = -\sum_{i} p_{i}\,\log q_{i} = -y\log \hat{y} - (1-y)\log(1-\hat{y}).$$
Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. For example, suppose we have $N$ samples with each sample indexed by $n = 1,\dots,N$. The average of the loss function is then given by

$$J(\mathbf{w}) = \frac{1}{N}\sum_{n=1}^{N} H(p_n, q_n) = -\frac{1}{N}\sum_{n=1}^{N}\left[y_{n}\log \hat{y}_{n} + (1-y_{n})\log(1-\hat{y}_{n})\right],$$

where $\hat{y}_{n} \equiv g(\mathbf{w}\cdot\mathbf{x}_{n}) = 1/(1+e^{-\mathbf{w}\cdot\mathbf{x}_{n}})$, with $g(z)$ the logistic function as before.

The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (in this case, the binary label is often denoted by {−1, +1}).[1]
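A minimal sketch of minimizing $J(\mathbf{w})$ by gradient descent, assuming NumPy; the synthetic data, learning rate, and iteration count are arbitrary illustrative choices, and the gradient expression used is the one derived in the remark below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data; X includes a bias column of ones.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
w_true = np.array([-0.5, 2.0, -1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss(w):
    """Average cross-entropy J(w) over the sample."""
    y_hat = sigmoid(X @ w)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Gradient descent on J(w); its gradient is (1/N) X^T (y_hat - y).
w = np.zeros(X.shape[1])
lr = 0.5
for _ in range(2000):
    y_hat = sigmoid(X @ w)
    w -= lr * X.T @ (y_hat - y) / n

print(loss(w), w)   # loss decreases; w roughly recovers the generating weights
```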
Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared error loss for linear regression. That is, define the design matrix

$$X = {\begin{pmatrix}1&x_{11}&\dots &x_{1p}\\1&x_{21}&\dots &x_{2p}\\\vdots &\vdots &&\vdots \\1&x_{n1}&\dots &x_{np}\end{pmatrix}} \in \mathbb{R}^{n\times(p+1)},$$

the predictions

$$\hat{y}_{i} = \frac{1}{1+\exp(-\beta_{0}-\beta_{1}x_{i1}-\dots-\beta_{p}x_{ip})},$$

and the loss

$$L(\boldsymbol{\beta}) = -\sum_{i=1}^{n}\left[y_{i}\log \hat{y}_{i} + (1-y_{i})\log(1-\hat{y}_{i})\right].$$

Then we have the result

$$\frac{\partial L(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = X^{T}(\hat{Y} - Y),$$

where $\hat{Y} = (\hat{y}_{1},\dots,\hat{y}_{n})^{T}$ and $Y = (y_{1},\dots,y_{n})^{T}$.

The proof is as follows. For any $\beta_{j}$, write $z_{i} = \beta_{0}+\beta_{1}x_{i1}+\dots+\beta_{p}x_{ip}$, so that $\hat{y}_{i} = 1/(1+e^{-z_{i}})$ and $\partial \hat{y}_{i}/\partial \beta_{j} = \hat{y}_{i}(1-\hat{y}_{i})\,x_{ij}$ (with $x_{i0}=1$); hence we have

$$\frac{\partial L}{\partial \beta_{j}} = -\sum_{i=1}^{n}\left(\frac{y_{i}}{\hat{y}_{i}}-\frac{1-y_{i}}{1-\hat{y}_{i}}\right)\hat{y}_{i}(1-\hat{y}_{i})\,x_{ij} = \sum_{i=1}^{n}(\hat{y}_{i}-y_{i})\,x_{ij}.$$

In a similar way for each component $j = 0,\dots,p$, we eventually obtain the desired result.
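A small sketch, assuming NumPy, that checks the analytic gradient $X^{T}(\hat{Y}-Y)$ against a finite-difference approximation on arbitrary synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # design matrix with bias column
y = rng.integers(0, 2, size=n).astype(float)
beta = rng.normal(size=p + 1)

def loss(b):
    y_hat = 1 / (1 + np.exp(-X @ b))
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Analytic gradient from the remark above: X^T (y_hat - y).
y_hat = 1 / (1 + np.exp(-X @ beta))
grad_analytic = X.T @ (y_hat - y)

# Central finite differences for comparison.
eps = 1e-6
grad_numeric = np.array([
    (loss(beta + eps * e) - loss(beta - eps * e)) / (2 * eps)
    for e in np.eye(p + 1)
])

print(np.max(np.abs(grad_analytic - grad_numeric)))   # tiny, e.g. < 1e-5
```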
References

1. Murphy, Kevin (2012). Machine Learning: A Probabilistic Perspective. MIT. ISBN 978-0262018029.