Markov chains on a measurable state space

A Markov chain on a measurable state space is a discrete-time, time-homogeneous Markov chain whose state space is a general measurable space.

History

The definition of Markov chains has evolved during the 20th century. In 1953 the term Markov chain was used for stochastic processes with a discrete or continuous index set, living on a countable or finite state space; see Doob[1] or Chung.[2] Since the late 20th century it has become more popular to consider a Markov chain as a stochastic process with a discrete index set, living on a measurable state space.[3][4][5]

Definition

Denote with $(E, \Sigma)$ a measurable space and with $p$ a Markov kernel with source and target $(E, \Sigma)$. A stochastic process $(X_n)_{n \in \mathbb{N}}$ on $(\Omega, \mathcal{F}, \mathbb{P})$ is called a time-homogeneous Markov chain with Markov kernel $p$ and start distribution $\mu$ if

$$\mathbb{P}[X_0 \in A_0, X_1 \in A_1, \dots, X_n \in A_n] = \int_{A_0} \cdots \int_{A_{n-1}} p(y_{n-1}, A_n) \, p(y_{n-2}, dy_{n-1}) \cdots p(y_0, dy_1) \, \mu(dy_0)$$

is satisfied for any $n \in \mathbb{N}$ and any $A_0, \dots, A_n \in \Sigma$. For any Markov kernel $p$ and any probability measure $\mu$ one can construct an associated Markov chain.[4]
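
The construction is easiest to see by example. The following is a minimal simulation sketch (not part of the article), assuming the Gaussian random-walk kernel $p(x, \cdot) = \mathcal{N}(x, 1)$ on $E = \mathbb{R}$ and the start distribution $\mu = \mathcal{N}(0, 1)$; the function names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_chain(n_steps, rng):
    """Simulate X_0, ..., X_n for the (assumed) Gaussian random-walk
    kernel p(x, .) = N(x, 1) with start distribution mu = N(0, 1)."""
    x = rng.normal(0.0, 1.0)       # X_0 ~ mu
    path = [x]
    for _ in range(n_steps):
        x = rng.normal(x, 1.0)     # X_{k+1} | X_k = x  ~  p(x, .)
        path.append(x)
    return np.array(path)

print(sample_chain(10, rng))
```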

Remark about Markov kernel integration

For any measure $\mu \colon \Sigma \to [0, \infty]$ we denote the Lebesgue integral of a $\mu$-integrable function $f \colon E \to \mathbb{R} \cup \{-\infty, +\infty\}$ as $\int_E f(x) \, \mu(dx)$. For the measure $\nu_x \colon \Sigma \to [0, \infty]$ defined by $\nu_x(A) := p(x, A)$ we use the following notation:

$$\int_E f(y) \, p(x, dy) := \int_E f(y) \, \nu_x(dy).$$
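
In other words, $\int_E f(y) \, p(x, dy)$ is an ordinary Lebesgue integral with respect to $\nu_x$, so it can be approximated by averaging $f$ over draws from $\nu_x$. A minimal sketch (an illustration, not from the article), again assuming the Gaussian random-walk kernel $p(x, \cdot) = \mathcal{N}(x, 1)$, for which $\int_E y^2 \, p(x, dy) = x^2 + 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = 2.0

# Draws from nu_x = p(x, .) = N(x, 1); the integral of f with respect
# to p(x, dy) is approximated by the sample mean of f over these draws.
samples = rng.normal(x, 1.0, size=100_000)
estimate = (samples**2).mean()     # f(y) = y^2

print(estimate, x**2 + 1)          # Monte Carlo estimate vs. exact value
```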

Basic properties

Starting in a single point

If $\mu$ is a Dirac measure in $x$, we denote for a Markov kernel $p$ with starting distribution $\mu$ the associated Markov chain as $(X_n)_{n \in \mathbb{N}}$ on $(\Omega, \mathcal{F}, \mathbb{P}_x)$ and the expectation value

$$\mathbb{E}_x[X] = \int_\Omega X(\omega) \, \mathbb{P}_x(d\omega)$$

for a $\mathbb{P}_x$-integrable function $X$. By definition, we then have $\mathbb{P}_x[X_0 = x] = 1$.

We have for any measurable function $f \colon E \to [0, \infty]$ the following relation:[4]

$$\int_E f(y) \, p(x, dy) = \mathbb{E}_x[f(X_1)].$$
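
For an indicator function $f = \mathbf{1}_A$ this relation can be read off from the defining formula with $n = 1$, $A_0 = E$, $A_1 = A$ and $\mu = \delta_x$; a short sketch of the step:

$$\mathbb{E}_x[\mathbf{1}_A(X_1)] = \mathbb{P}_x[X_1 \in A] = \int_E p(y_0, A) \, \delta_x(dy_0) = p(x, A) = \int_E \mathbf{1}_A(y) \, p(x, dy).$$

For a general measurable $f \geq 0$ the relation then follows by approximation with simple functions and monotone convergence.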

Family of Markov kernels

For a Markov kernel $p$ with starting distribution $\mu$ one can introduce a family of Markov kernels $(p_n)_{n \in \mathbb{N}}$ by

$$p_{n+1}(x, A) := \int_E p_n(y, A) \, p(x, dy)$$

for $n \geq 1$ and $p_1 := p$. For the associated Markov chain $(X_n)_{n \in \mathbb{N}}$ according to $p$ and $\mu$ one obtains

$$\mathbb{P}[X_0 \in A, \; X_n \in B] = \int_A p_n(x, B) \, \mu(dx).$$
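
On a finite state space the kernel becomes a row-stochastic matrix, the recursion above is matrix multiplication, and $p_n$ is the $n$-th matrix power. A minimal numerical sketch (the two-state kernel is an illustrative assumption, not from the article) checking the joint-probability formula by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

P  = np.array([[0.9, 0.1],
               [0.4, 0.6]])   # kernel p as a row-stochastic matrix
mu = np.array([0.5, 0.5])     # start distribution
n  = 3

# p_{n+1}(x, A) = sum_y p_n(y, A) p(x, {y})  <=>  p_n = n-th matrix power
Pn = np.linalg.matrix_power(P, n)
exact = mu[0] * Pn[0, 1]      # P[X_0 = 0, X_n = 1] = mu({0}) p_n(0, {1})

hits, trials = 0, 100_000
for _ in range(trials):
    x0 = x = rng.choice(2, p=mu)       # X_0 ~ mu
    for _ in range(n):
        x = rng.choice(2, p=P[x])      # X_{k+1} ~ p(X_k, .)
    hits += (x0 == 0) and (x == 1)

print(hits / trials, exact)
```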

Stationary measure

A probability measure $\mu$ is called a stationary measure of a Markov kernel $p$ if

$$\int_A \mu(dx) = \int_E p(x, A) \, \mu(dx)$$

holds for any $A \in \Sigma$. If $(X_n)_{n \in \mathbb{N}}$ on $(\Omega, \mathcal{F}, \mathbb{P})$ denotes the Markov chain according to a Markov kernel $p$ with stationary measure $\mu$, and the distribution of $X_0$ is $\mu$, then all $X_n$ have the same probability distribution, namely

$$\mathbb{P}[X_n \in A] = \mu(A)$$

for any $A \in \Sigma$.
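
In the finite case the stationarity condition reads $\mu P = \mu$ for a row vector $\mu$, i.e. $\mu$ is a left eigenvector of the transition matrix for eigenvalue $1$. A minimal sketch, reusing the illustrative two-state kernel from above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# A stationary measure satisfies mu = mu @ P: a left eigenvector of P
# for eigenvalue 1, normalised to a probability vector.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.isclose(w, 1.0)][:, 0])
mu /= mu.sum()

print(mu)        # [0.8 0.2] for this kernel
print(mu @ P)    # equals mu again: every X_n has distribution mu
```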

Reversibility

A Markov kernel $p$ is called reversible according to a probability measure $\mu$ if

$$\int_A p(x, B) \, \mu(dx) = \int_B p(x, A) \, \mu(dx)$$

holds for any $A, B \in \Sigma$. Replacing $A = E$ shows that if $p$ is reversible according to $\mu$, then $\mu$ must be a stationary measure of $p$: since $p(x, E) = 1$, the right-hand side reduces to $\mu(B)$, which gives $\mu(B) = \int_E p(x, B) \, \mu(dx)$.
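
On a finite state space reversibility is the detailed-balance condition $\mu(x) \, p(x, y) = \mu(y) \, p(y, x)$, and summing it over $x$ recovers stationarity, mirroring the $A = E$ argument above. A small sketch with the same illustrative kernel:

```python
import numpy as np

P  = np.array([[0.9, 0.1],
               [0.4, 0.6]])
mu = np.array([0.8, 0.2])     # stationary measure of P (see above)

# Detailed balance: mu(x) p(x, y) = mu(y) p(y, x) for all x, y,
# i.e. the matrix mu(x) p(x, y) is symmetric.
balance = mu[:, None] * P
print(np.allclose(balance, balance.T))   # True: reversible w.r.t. mu

# Summing detailed balance over x recovers stationarity:
print(mu @ P, mu)
```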

References

  1. ^ Joseph L. Doob: Stochastic Processes. New York: John Wiley & Sons, 1953.
  2. ^ Kai L. Chung: Markov Chains with Stationary Transition Probabilities. Second edition. Berlin: Springer-Verlag, 1974.
  3. ^ Sean Meyn and Richard L. Tweedie: Markov Chains and Stochastic Stability. Second edition. Cambridge: Cambridge University Press, 2009.
  4. ^ a b c Daniel Revuz: Markov Chains. Second edition. Amsterdam: North-Holland, 1984.
  5. ^ Rick Durrett: Probability: Theory and Examples. Fourth edition. Cambridge: Cambridge University Press, 2010.