Ergodicity

In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies that the average behavior of the system can be deduced from the trajectory of a "typical" point. Equivalently, a sufficiently large collection of random samples from a process can represent the average statistical properties of the entire process. Ergodicity is a property of the system; it is a statement that the system cannot be reduced or factored into smaller components. Ergodic theory is the study of systems possessing ergodicity.

Ergodicity occurs in a broad range of systems in physics and in geometry. This can be roughly understood to be due to a common phenomenon: the motion of particles, that is, of geodesics on a hyperbolic manifold, is divergent; when that manifold is compact, that is, of finite size, those orbits return to the same general area, eventually filling the entire space.

Ergodic systems capture the common-sense, every-day notions of randomness, such as that smoke might come to fill all of a smoke-filled room, or that a block of metal might eventually come to have the same temperature throughout, or that flips of a fair coin may come up heads half the time and tails half the time. A stronger concept than ergodicity is that of mixing, which aims to mathematically describe the common-sense notions of mixing, such as mixing drinks or mixing cooking ingredients.

The proper mathematical formulation of ergodicity is founded on the formal definitions of measure theory and dynamical systems, and rather specifically on the notion of a measure-preserving dynamical system. The origins of ergodicity lie in statistical physics, where Ludwig Boltzmann formulated the ergodic hypothesis.

Informal explanation

Ergodicity occurs in broad settings in physics and mathematics. All of these settings are unified by a common mathematical description, that of the measure-preserving dynamical system. An informal description of this, and a definition of ergodicity with respect to it, is given immediately below. This is followed by a description of ergodicity in stochastic processes. They are one and the same, despite using dramatically different notation and language.

Measure-preserving dynamical systems

The mathematical definition of ergodicity aims to capture ordinary every-day ideas about randomness. This includes ideas about systems that move in such a way as to (eventually) fill up all of space, such as diffusion and Brownian motion, as well as common-sense notions of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, the dust in Saturn's rings and so on. To provide a solid mathematical footing, descriptions of ergodic systems begin with the definition of a measure-preserving dynamical system. This is written as $(X, \mathcal{A}, \mu, T)$.

The set $X$ is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure $\mu$ is understood to define the natural volume of the space $X$ and of its subspaces. The collection of subspaces is denoted by $\mathcal{A}$, and the size of any given subset $A \subset X$ is $\mu(A)$; the size is its volume. Naively, one could imagine $\mathcal{A}$ to be the power set of $X$; this doesn't quite work, as not all subsets of a space have a volume (famously, the Banach–Tarski paradox). Thus, conventionally, $\mathcal{A}$ consists of the measurable subsets—the subsets that do have a volume. It is conventionally taken to be the Borel σ-algebra—the collection of subsets that can be constructed by taking intersections, unions and set complements of open sets; these can always be taken to be measurable.

The time evolution of the system is described by a map $T: X \to X$. Given some subset $A \subset X$, its map $T(A)$ will in general be a deformed version of $A$ – it is squashed or stretched, folded or cut into pieces. Mathematical examples include the baker's map and the horseshoe map, both inspired by bread-making. The set $T(A)$ must have the same volume as $A$; the squashing/stretching does not alter the volume of the space, only its distribution. Such a system is "measure-preserving" (area-preserving, volume-preserving).

A formal difficulty arises when one tries to reconcile the volume of sets with the need to preserve their size under a map. The problem arises because, in general, several different points in the domain of a function can map to the same point in its range; that is, there may be $x \ne y$ with $T(x) = T(y)$. Worse, a single point $x \in X$ has no size. These difficulties can be avoided by working with the inverse map $T^{-1}: \mathcal{A} \to \mathcal{A}$; it will map any given subset $A \subset X$ to the parts that were assembled to make it: these parts are $T^{-1}(A) \in \mathcal{A}$. It has the important property of not losing track of where things came from. More strongly, it has the important property that any (measure-preserving) map $\mathcal{A} \to \mathcal{A}$ is the inverse of some map $X \to X$. The proper definition of a volume-preserving map is one for which $\mu(A) = \mu(T^{-1}(A))$ because $T^{-1}(A)$ describes all the pieces-parts that $A$ came from.

One is now interested in studying the time evolution of the system. If a set $A \in \mathcal{A}$ eventually comes to fill all of $X$ over a long period of time (that is, if $T^n(A)$ approaches all of $X$ for large $n$), the system is said to be ergodic. If every set $A$ behaves in this way, the system is a conservative system, placed in contrast to a dissipative system, where some subsets $A$ wander away, never to be returned to. An example would be water running downhill: once it's run down, it will never come back up again. The lake that forms at the bottom of this river can, however, become well-mixed. Every measure-preserving system can be split into two parts: the conservative part, and the dissipative part (this is the Hopf decomposition).

Mixing is a stronger statement than ergodicity. Mixing asks for this ergodic property to hold between any two sets $A, B$, and not just between some set $A$ and $X$. That is, given any two sets $A, B \in \mathcal{A}$, a system is said to be (topologically) mixing if there is an integer $N$ such that, for all $A, B$ and all $n > N$, one has that $T^n(A) \cap B \ne \varnothing$. Here, $\cap$ denotes set intersection and $\varnothing$ is the empty set. Other notions of mixing include strong and weak mixing, which describe the notion that the mixed substances intermingle everywhere, in equal proportion. This can be non-trivial, as practical experience of trying to mix sticky, gooey substances shows.

Processes

The above discussion appeals to a physical sense of a volume. The volume does not have to literally be some portion of 3D space; it can be some abstract volume. This is generally the case in statistical systems, where the volume (the measure) is given by the probability. The total volume corresponds to probability one. This correspondence works because the axioms of probability theory are identical to those of measure theory; these are the Kolmogorov axioms.[citation needed]

The idea of a volume can be very abstract. Consider, for example, the set of all possible coin-flips: the set of infinite sequences of heads and tails. Assigning the volume of 1 to this space, it is clear that half of all such sequences start with heads, and half start with tails. One can slice up this volume in other ways: one can say "I don't care about the first $n-1$ coin-flips; but I want the $n$'th of them to be heads, and then I don't care about what comes after that". This can be written as the set $(*, \cdots, *, h, *, \cdots)$ where $*$ is "don't care" and $h$ is "heads". The volume of this space is again (obviously!) one-half.

The above is enough to build up a measure-preserving dynamical system, in its entirety. The sets of $h$ or $t$ occurring in the $n$'th place are called cylinder sets. The set of all possible intersections, unions and complements of the cylinder sets then form the Borel sets defined above. In formal terms, the cylinder sets form the base for a topology on the space $X$ of all possible infinite-length coin-flips. The measure $\mu$ has all of the common-sense properties one might hope for: the measure of a cylinder set with $h$ in the $m$'th position, and $t$ in the $k$'th position is obviously 1/4, and so on. These common-sense properties persist for set-complement and set-union: everything except for $h$ and $t$ in locations $m$ and $k$ obviously has the volume of 3/4. All together, these form the axioms of a sigma-additive measure; measure-preserving dynamical systems always use sigma-additive measures. For coin flips, this measure is called the Bernoulli measure.
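
To make the cylinder-set volumes concrete, here is a minimal Python sketch (not part of the standard exposition; the positions m and k and the sample sizes are arbitrary choices) that estimates the Bernoulli measure of the sets discussed above by sampling random coin-flip prefixes. The estimates come out near 1/2, 1/4 and 3/4, as claimed.

    import random

    def sample_flips(length):
        """Draw one finite prefix of a fair coin-flip sequence."""
        return [random.choice("ht") for _ in range(length)]

    def estimate_measure(event, length=10, trials=200_000):
        """Monte Carlo estimate of the Bernoulli measure of an event depending on finitely many flips."""
        hits = sum(1 for _ in range(trials) if event(sample_flips(length)))
        return hits / trials

    m, k = 2, 5  # the two positions used in the text (0-indexed here)

    print(estimate_measure(lambda x: x[m] == "h"))                        # about 1/2
    print(estimate_measure(lambda x: x[m] == "h" and x[k] == "t"))        # about 1/4
    print(estimate_measure(lambda x: not (x[m] == "h" and x[k] == "t")))  # about 3/4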

For the coin-flip process, the time-evolution operator $T$ is the shift operator that says "throw away the first coin-flip, and keep the rest". Formally, if $(x_1, x_2, x_3, \cdots)$ is a sequence of coin-flips, then $T(x_1, x_2, x_3, \cdots) = (x_2, x_3, \cdots)$. The measure $\mu$ is obviously shift-invariant: as long as we are talking about some set $A$ for which the first coin-flip $x_1$ is the "don't care" value, then the volume $\mu(A)$ does not change: $\mu(A) = \mu(T(A))$. In order to avoid talking about the first coin-flip, it is easier to define $T^{-1}$ as inserting a "don't care" value into the first position: $T^{-1}(x_1, x_2, \cdots) = (*, x_1, x_2, \cdots)$. With this definition, one obviously has that $\mu(T^{-1}(A)) = \mu(A)$ with no constraints on $A$. This is again an example of why $T^{-1}$ is used in the formal definitions.

The above development takes a random process, the Bernoulli process, and converts it to a measure-preserving dynamical system $(X, \mathcal{A}, \mu, T)$. The same conversion (equivalence, isomorphism) can be applied to any stochastic process. Thus, an informal definition of ergodicity is that a sequence is ergodic if it visits all of $X$; such sequences are "typical" for the process. Another is that its statistical properties can be deduced from a single, sufficiently long, random sample of the process (thus uniformly sampling all of $X$), or that any collection of random samples from a process must represent the average statistical properties of the entire process (that is, samples drawn uniformly from $X$ are representative of $X$ as a whole). In the present example, a sequence of coin flips, where half are heads, and half are tails, is a "typical" sequence.
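
The "single long sample" reading of ergodicity can be checked numerically as well. The following sketch (an informal illustration; the sequence length is arbitrary) generates one long fair coin-flip sequence and reads off statistics from that single trajectory: the frequency of heads approaches 1/2 and the frequency of the pattern "heads then tails" approaches 1/4, matching the volumes assigned by the Bernoulli measure.

    import random

    flips = [random.choice("ht") for _ in range(1_000_000)]  # one long sample path

    # Time average of "heads at the current position" along the sequence.
    heads_frequency = flips.count("h") / len(flips)

    # Time average of the cylinder "heads now, tails next".
    ht_frequency = sum(
        1 for a, b in zip(flips, flips[1:]) if (a, b) == ("h", "t")
    ) / (len(flips) - 1)

    print(heads_frequency)  # close to 1/2
    print(ht_frequency)     # close to 1/4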

There are several important points to be made about the Bernoulli process. If one writes 0 for tails and 1 for heads, one gets the set of all infinite strings of binary digits. These correspond to the base-two expansion of real numbers. Explicitly, given a sequence $(x_1, x_2, \cdots)$, the corresponding real number is

$y = \sum_{n=1}^{\infty} \frac{x_n}{2^n}$

The statement that the Bernoulli process is ergodic is equivalent to the statement that the real numbers are uniformly distributed. The set of all such strings can be written in a variety of ways: $\{h, t\}^\infty = \{h, t\}^\omega = \{0, 1\}^\omega = 2^\omega = 2^{\mathbb{N}}$. This set is the Cantor set, sometimes called the Cantor space to avoid confusion with the Cantor function.

In the end, these are all "the same thing".
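
The correspondence with uniformly distributed real numbers is also easy to see numerically. The sketch below (an informal check; the truncation of the binary expansion and the number of samples are arbitrary) maps random coin-flip sequences to real numbers via the sum above and tallies how many land in each tenth of the unit interval; the counts come out roughly equal.

    import random

    def flips_to_real(bits):
        """Interpret a finite coin-flip prefix (1 = heads, 0 = tails) as a truncated binary expansion."""
        return sum(b / 2 ** (n + 1) for n, b in enumerate(bits))

    samples = [
        flips_to_real([random.randint(0, 1) for _ in range(40)])
        for _ in range(100_000)
    ]

    counts = [0] * 10
    for y in samples:
        counts[min(int(y * 10), 9)] += 1
    print(counts)  # all ten counts are close to 10_000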

The Cantor set plays key roles in many branches of mathematics. In recreational mathematics, it underpins the period-doubling fractals; in analysis, it appears in a vast variety of theorems. A key one for stochastic processes is the Wold decomposition, which states that any stationary process can be decomposed into a pair of uncorrelated processes, one deterministic, and the other being a moving average process.

The Ornstein isomorphism theorem states that Bernoulli schemes (a Bernoulli scheme is a Bernoulli process with an N-sided (and possibly unfair) gaming die) of equal entropy are isomorphic; many stationary stochastic processes turn out to be equivalent to such a scheme. Other results include that every non-dissipative ergodic system is equivalent to the Markov odometer, sometimes called an "adding machine" because it looks like elementary-school addition, that is, taking a base-N digit sequence, adding one, and propagating the carry bits. The proof of equivalence is very abstract; understanding the result is not: by adding one at each time step, every possible state of the odometer is visited, until it rolls over, and starts again. Likewise, ergodic systems visit each state, uniformly, moving on to the next, until they have all been visited.
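
The "adding machine" picture can be made concrete with a short sketch (the base and the number of digits are arbitrary illustrative choices): repeatedly adding one with carry propagation visits every possible digit state exactly once before rolling over, just as an ergodic system visits every state.

    def odometer_step(digits, base):
        """Add one to a least-significant-digit-first sequence, propagating the carries."""
        out = list(digits)
        for i in range(len(out)):
            out[i] += 1
            if out[i] < base:
                return out
            out[i] = 0  # carry into the next digit
        return out      # rolled over to all zeros

    base, ndigits = 3, 4
    state = [0] * ndigits
    seen = set()
    for _ in range(base ** ndigits):
        seen.add(tuple(state))
        state = odometer_step(state, base)

    print(len(seen) == base ** ndigits)  # True: every state was visited before rolling over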

Systems that generate (infinite) sequences of N letters are studied by means of symbolic dynamics. Important special cases include subshifts of finite type and sofic systems.

History and etymology

The term ergodic is commonly thought to derive from the Greek words ἔργον (ergon: "work") and ὁδός (hodos: "path", "way"), as chosen by Ludwig Boltzmann while he was working on a problem in statistical mechanics.[1] At the same time it is also claimed to be a derivation of ergomonode, coined by Boltzmann in a relatively obscure paper from 1884. The etymology appears to be contested in other ways as well.[2]

The idea of ergodicity was born in the field of thermodynamics, where it was necessary to relate the individual states of gas molecules to the temperature of the gas as a whole and its time evolution. In order to do this, it was necessary to state what exactly it means for gases to mix well together, so that thermodynamic equilibrium could be defined with mathematical rigor. Once the theory was well developed in physics, it was rapidly formalized and extended, so that ergodic theory has long been an independent area of mathematics in itself. As part of that progression, more than one slightly different definition of ergodicity and multitudes of interpretations of the concept in different fields coexist.[citation needed]

For example, in classical physics the term implies that a system satisfies the ergodic hypothesis of thermodynamics,[3] the relevant state space being position and momentum space.

In dynamical systems theory the state space is usually taken to be a more general phase space. On the other hand, in coding theory the state space is often discrete in both time and state, with less concomitant structure. In all those fields the ideas of time average and ensemble average can also carry extra baggage as well—as is the case with the many possible thermodynamically relevant partition functions used to define ensemble averages in physics. As such, the measure-theoretic formalization of the concept also serves as a unifying discipline. In 1913 Michel Plancherel proved the strict impossibility of ergodicity for a purely mechanical system.[citation needed]

Occurrence

A review of ergodicity in physics and in geometry follows. In all cases, the notion of ergodicity is exactly the same as that for dynamical systems; there is no difference, except for outlook, notation, style of thinking and the journals where results are published.

In physics

Physical systems can be split into three categories: classical mechanics, which describes machines with a finite number of moving parts, quantum mechanics, which describes the structure of atoms, and statistical mechanics, which describes gases, liquids, solids; this includes condensed matter physics. The case of classical mechanics is discussed in the next section, on ergodicity in geometry. As to quantum mechanics, although there is a conception of quantum chaos, there is no clear definition of ergodicity; what this might be is hotly debated. This section reviews ergodicity in statistical mechanics.

The above abstract definition of a volume is required as the appropriate setting for definitions of ergodicity in physics. Consider a container of liquid, or gas, or plasma, or other collection of atoms or particles. Each and every particle has a 3D position, and a 3D velocity, and is thus described by six numbers: a point in six-dimensional space $\mathbb{R}^6$. If there are $N$ of these particles in the system, a complete description requires $6N$ numbers. Any one system is just a single point in $\mathbb{R}^{6N}$. The physical system is not all of $\mathbb{R}^{6N}$, of course; if it's a box of width, height and length $W \times H \times L$ then a point is in $(W \times H \times L \times \mathbb{R}^3)^N$. Nor can velocities be infinite: they are scaled by some probability measure, for example the Boltzmann–Gibbs measure for a gas. Nonetheless, for $N$ close to Avogadro's number, this is obviously a very large space. This space is called the canonical ensemble.

A physical system is said to be ergodic if any representative point of the system eventually comes to visit the entire volume of the system. For the above example, this implies that any given atom not only visits every part of the box with uniform probability, but it does so with every possible velocity, with probability given by the Boltzmann distribution for that velocity (so, uniform with respect to that measure). The ergodic hypothesis states that physical systems actually are ergodic. Multiple time scales are at work: gases and liquids appear to be ergodic over short time scales. Ergodicity in a solid can be viewed in terms of the vibrational modes or phonons, as obviously the atoms in a solid do not exchange locations. Glasses present a challenge to the ergodic hypothesis; time scales are assumed to be in the millions of years, but results are contentious. Spin glasses present particular difficulties.

Formal mathematical proofs of ergodicity in statistical physics are hard to come by; most high-dimensional many-body systems are assumed to be ergodic, without mathematical proof. Exceptions include the dynamical billiards, which model billiard ball-type collisions of atoms in an ideal gas or plasma. The first hard-sphere ergodicity theorem was for Sinai's billiards, which considers two balls, one of them taken as being stationary, at the origin. As the second ball collides, it moves away; applying periodic boundary conditions, it then returns to collide again. By appeal to homogeneity, this return of the "second" ball can instead be taken to be "just some other atom" that has come into range, and is moving to collide with the atom at the origin (which can be taken to be just "any other atom".) This is one of the few formal proofs that exist; there are no equivalent statements e.g. for atoms in a liquid, interacting via van der Waals forces, even if it would be common sense to believe that such systems are ergodic (and mixing). More precise physical arguments can be made, though.

In geometry

Ergodicity is a widespread phenomenon in the study of Riemannian manifolds. A quick sequence of examples, from simple to complicated, illustrates this point.

The geodesic flow of a flat torus following any irrational direction is ergodic; informally this means that when drawing a straight line in a square starting at any point, and with an irrational angle with respect to the sides, if every time one meets a side one starts over on the opposite side with the same angle, the line will eventually meet every subset of positive measure. More generally on any flat surface there are many ergodic directions for the geodesic flow.

There are similar results for negatively curved compact Riemann surfaces; note that in this case the definition of the geodesic flow is much more involved since there is no notion of constant direction on a non-flat surface. More generally the geodesic flow on a negatively curved compact Riemannian manifold is ergodic; in fact it satisfies the stronger property of being an Anosov flow.

In finance

Ergodicity is widely assumed in finance and investment, and many theories in these fields rely on it, explicitly or implicitly. The ergodic assumption is prevalent in modern portfolio theory, discounted cash flow (DCF) models, and the aggregate indicator models that pervade macroeconomics, among others.

The models built on these theories can be useful, but often only during much, though not all, of any particular time period under study. They can therefore miss some of the largest deviations from the standard model, such as financial crises, debt crises and systemic risk in the banking system, which occur only infrequently.

Nassim Nicholas Taleb has pointed out that a very important part of empirical reality in finance and investment is non-ergodic. An even statistical distribution of probabilities, where the system returns to every possible state an infinite number of times, is simply not what we observe in situations where "absorbing states" are reached: states from which there is no escape, where ruin is seen. The death of an individual, the total loss of everything, or the devolution or dismemberment of a nation state and the legal regime that accompanied it, are all absorbing states. Thus, in finance, path dependence matters. A path where an individual, firm or country hits a "stop"—an absorbing barrier, "anything that prevents people with skin in the game from emerging from it, and to which the system will invariably tend. Let us call these situations ruin, as the entity cannot emerge from the condition. The central problem is that if there is a possibility of ruin, cost benefit analyses are no longer possible."[4]—will be non-ergodic. All traditional models based on standard probabilistic statistics break down in these extreme situations.

Definition for discrete-time systems

Formal definition

Let $(X, \mathcal{B})$ be a measurable space. If $T$ is a measurable function from $X$ to itself and $\mu$ a probability measure on $(X, \mathcal{B})$, then we say that $T$ is $\mu$-ergodic or $\mu$ is an ergodic measure for $T$ if $T$ preserves $\mu$ and the following condition holds:

For any $A \in \mathcal{B}$ such that $T^{-1}(A) = A$, either $\mu(A) = 0$ or $\mu(A) = 1$.

In other words there are no $T$-invariant subsets up to measure 0 (with respect to $\mu$). Recall that $T$ preserving $\mu$ (or $\mu$ being $T$-invariant) means that $\mu(T^{-1}(A)) = \mu(A)$ for all $A \in \mathcal{B}$ (see also measure-preserving dynamical system).

Examples

The simplest example is when $X$ is a finite set and $\mu$ the counting measure. Then a self-map of $X$ preserves $\mu$ if and only if it is a bijection, and it is ergodic if and only if $T$ has only one orbit (that is, for every $x, y \in X$ there exists $k \in \mathbb{N}$ such that $y = T^k(x)$). For example, if $X = \{1, 2, \ldots, n\}$ then the cycle $(1\ 2\ \cdots\ n)$ is ergodic, but a permutation with two cycles, such as $(1\ 2)(3\ 4\ \cdots\ n)$, is not (it has the two invariant subsets $\{1, 2\}$ and $\{3, 4, \ldots, n\}$).
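
For this finite setting, ergodicity can be tested directly. The sketch below (function names are hypothetical; the permutation is written as a list sending i to perm[i]) checks whether a permutation of {0, ..., n-1} has a single orbit, which by the discussion above is exactly ergodicity for the counting measure.

    def is_ergodic(perm):
        """A permutation of range(len(perm)) preserves counting measure; it is ergodic iff it has one orbit."""
        n = len(perm)
        x, orbit = 0, {0}
        for _ in range(n - 1):
            x = perm[x]
            orbit.add(x)
        return len(orbit) == n

    n = 6
    cycle = [(i + 1) % n for i in range(n)]  # the n-cycle 0 -> 1 -> ... -> n-1 -> 0
    two_cycles = [1, 0, 3, 4, 5, 2]          # swaps {0, 1}, cycles {2, 3, 4, 5}

    print(is_ergodic(cycle))       # True
    print(is_ergodic(two_cycles))  # False: {0, 1} is a nontrivial invariant subset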

Equivalent formulations

The definition given above admits the following immediate reformulations:

  • for every $A \in \mathcal{B}$ with $\mu(T^{-1}(A) \,\triangle\, A) = 0$ we have $\mu(A) = 0$ or $\mu(A) = 1$ (where $\triangle$ denotes the symmetric difference);
  • for every $A \in \mathcal{B}$ with positive measure we have $\mu\left(\bigcup_{n=1}^{\infty} T^{-n}(A)\right) = 1$;
  • for every two sets $A, B \in \mathcal{B}$ of positive measure, there exists $n > 0$ such that $\mu(T^{-n}(A) \cap B) > 0$;
  • Every measurable function $f: X \to \mathbb{R}$ with $f \circ T = f$ is constant on a subset of full measure.

Importantly for applications, the condition in the last characterisation can be restricted to square-integrable functions only:

  • If $f \in L^2(X, \mu)$ and $f \circ T = f$, then $f$ is constant almost everywhere.

Further examples

Bernoulli shifts and subshifts

Let $S$ be a finite set and $X = S^{\mathbb{N}}$ with $\mu$ the product measure (each factor $S$ being endowed with its normalised counting measure). Then the shift operator $T$ defined by $T\left((s_k)_{k \in \mathbb{N}}\right) = (s_{k+1})_{k \in \mathbb{N}}$ is $\mu$-ergodic.[5]

There are many more ergodic measures for the shift map on $X$. Periodic sequences give finitely supported measures. More interestingly, there are infinitely supported ones, such as measures concentrated on subshifts of finite type.

Irrational rotations

Let $X$ be the unit circle $\{z \in \mathbb{C} : |z| = 1\}$, with its Lebesgue measure $\mu$. For any $\theta \in \mathbb{R}$ the rotation of $X$ of angle $\theta$ is given by $T_\theta(z) = e^{2i\pi\theta} z$. If $\theta \in \mathbb{Q}$ then $T_\theta$ is not ergodic for the Lebesgue measure as it has infinitely many finite orbits. On the other hand, if $\theta$ is irrational then $T_\theta$ is ergodic.[6]
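
A quick numerical illustration of the irrational case (the angle and the test arc are arbitrary choices): the fraction of time an orbit spends in a fixed arc converges to the length of that arc, which is what the ergodic theorems below predict for the time average of its indicator function.

    import math

    theta = math.sqrt(2) - 1   # an irrational rotation angle, as a fraction of a full turn
    a, b = 0.2, 0.5            # a test arc, with the circle parametrised by [0, 1)

    x, hits, steps = 0.0, 0, 200_000
    for _ in range(steps):
        if a <= x < b:
            hits += 1
        x = (x + theta) % 1.0  # rotate by theta

    print(hits / steps)        # close to b - a = 0.3, the Lebesgue measure of the arc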

Arnold's cat map

Let $X = \mathbb{R}^2/\mathbb{Z}^2$ be the 2-torus. Then any element $g \in \mathrm{SL}_2(\mathbb{Z})$ defines a self-map of $X$ since $g(\mathbb{Z}^2) = \mathbb{Z}^2$. When $g = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ one obtains the so-called Arnold's cat map, which is ergodic for the Lebesgue measure on the torus.
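
The cat map is equally easy to experiment with. In the sketch below (the starting point and the test function are arbitrary), the time average of a function along a single orbit comes out close to its integral over the torus, as expected of an ergodic map; note that floating-point round-off makes the computed orbit only a pseudo-orbit, which is good enough for this informal check.

    def cat_map(x, y):
        """One step of Arnold's cat map (x, y) -> (2x + y, x + y) modulo 1 on the unit torus."""
        return (2 * x + y) % 1.0, (x + y) % 1.0

    def f(x, y):
        return x * y  # a test observable; its integral over the torus is 1/4

    x, y = 0.1234, 0.5678  # a generic starting point
    total, steps = 0.0, 200_000
    for _ in range(steps):
        total += f(x, y)
        x, y = cat_map(x, y)

    print(total / steps)   # close to 0.25, the space average of f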

Ergodic theorems

If $\mu$ is a probability measure on a space $X$ which is ergodic for a transformation $T$, the pointwise ergodic theorem of G. Birkhoff states that for every function $f \in L^1(\mu)$ and for $\mu$-almost every point $x \in X$, the time average on the orbit of $x$ converges to the space average of $f$. Formally this means that

$\lim_{k \to +\infty} \frac{1}{k} \sum_{i=0}^{k-1} f\left(T^i(x)\right) = \int_X f \, d\mu.$

The mean ergodic theorem of J. von Neumann is a similar, weaker statement about averaged translates of square-integrable functions.

Related properties

Dense orbits

An immediate consequence of the definition of ergodicity is that on a topological space $X$, and if $\mathcal{B}$ is the σ-algebra of Borel sets, if $T$ is $\mu$-ergodic then $\mu$-almost every orbit of $T$ is dense in the support of $\mu$.

This is not an equivalence since for a transformation which is not uniquely ergodic, but for which there is an ergodic measure $\mu_1$ with full support, for any other ergodic measure $\mu_2$ the measure $\frac{1}{2}(\mu_1 + \mu_2)$ is not ergodic for $T$, but its orbits are dense in the support. Explicit examples can be constructed with shift-invariant measures.[7]

Mixing

A transformation $T$ of a probability measure space $(X, \mu)$ is said to be mixing for the measure $\mu$ if for any measurable sets $A, B \subset X$ the following holds:

$\lim_{n \to +\infty} \mu\left(T^{-n}(A) \cap B\right) = \mu(A)\,\mu(B)$

It is immediate that a mixing transformation is also ergodic (taking $A$ to be a $T$-invariant subset and $B$ its complement). The converse is not true, for example a rotation with irrational angle on the circle (which is ergodic per the examples above) is not mixing (for a sufficiently small arc $A$ its successive images will not intersect $A$ most of the time). Bernoulli shifts are mixing, and so is Arnold's cat map.
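
The failure of mixing for an irrational rotation can also be computed directly. The sketch below (the angle and the arc length are arbitrary) evaluates the exact overlap $\mu(T^{-n}(A) \cap A)$ for a small arc $A$: instead of settling near $\mu(A)^2$, it keeps oscillating between 0 and $\mu(A)$.

    import math

    theta = math.sqrt(2) - 1  # irrational rotation angle, as a fraction of a full turn
    length = 0.05             # the arc A = [0, length)

    def overlap(n):
        """Lebesgue measure of the intersection of A with its n-th preimage under the rotation."""
        d = (n * theta) % 1.0        # circular offset between A and its n-th preimage
        d = min(d, 1.0 - d)
        return max(0.0, length - d)

    for n in [1, 10, 100, 1_000, 10_000]:
        print(n, overlap(n))  # oscillates between 0 and 0.05, never converging to 0.05 ** 2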

This notion of mixing is sometimes called strong mixing, as opposed to weak mixing which means that

$\lim_{n \to +\infty} \frac{1}{n} \sum_{k=1}^{n} \left| \mu\left(T^{-k}(A) \cap B\right) - \mu(A)\,\mu(B) \right| = 0$

Proper ergodicity

The transformation $T$ is said to be properly ergodic if it does not have an orbit of full measure. In the discrete case this means that the measure $\mu$ is not supported on a finite orbit of $T$.

Definition for continuous-time dynamical systems

The definition is essentially the same for continuous-time dynamical systems as for a single transformation. Let $(X, \mathcal{B})$ be a measurable space. A continuous-time system is given by a family $(T_t)_{t > 0}$ of measurable functions from $X$ to itself, so that for any $t, s > 0$ the relation $T_{s+t} = T_s \circ T_t$ holds (usually it is also asked that the orbit map $(t, x) \mapsto T_t(x)$ is also measurable). If $\mu$ is a probability measure on $(X, \mathcal{B})$ then we say that $(T_t)_{t > 0}$ is $\mu$-ergodic or $\mu$ is an ergodic measure for $(T_t)_{t > 0}$ if each $T_t$ preserves $\mu$ and the following condition holds:

For any $A \in \mathcal{B}$, if for all $t > 0$ we have $T_t^{-1}(A) = A$, then either $\mu(A) = 0$ or $\mu(A) = 1$.

Examples

As in the discrete case the simplest example is that of a transitive action, for instance the action of $\mathbb{R}$ on the circle given by $T_t(z) = e^{2i\pi t} z$ is ergodic for Lebesgue measure.

An example with infinitely many orbits is given by the flow along an irrational slope on the torus: let $X = \mathbb{S}^1 \times \mathbb{S}^1$ and $\alpha \in \mathbb{R}$. Let $T_t(u, v) = (u + t, v + \alpha t)$; then if $\alpha$ is irrational this is ergodic for the Lebesgue measure.

Ergodic flows

Further examples of ergodic flows are:

  • Billiards in certain convex Euclidean domains, such as the Bunimovich stadium;
  • the geodesic flow of a negatively curved Riemannian manifold of finite volume is ergodic (for the normalised volume measure);
  • the horocycle flow on a hyperbolic manifold of finite volume is ergodic (for the normalised volume measure)

Ergodicity in compact metric spaces

If $X$ is a compact metric space it is naturally endowed with the σ-algebra of Borel sets. The additional structure coming from the topology then allows a much more detailed theory for ergodic transformations and measures on $X$.

Functional analysis interpretation

A very powerful alternate definition of ergodic measures can be given using the theory of Banach spaces. Radon measures on $X$ form a Banach space of which the set $\mathcal{P}(X)$ of probability measures on $X$ is a convex subset. Given a continuous transformation $T$ of $X$, the subset $\mathcal{P}(X)^T$ of $T$-invariant measures is a closed convex subset, and a measure is ergodic for $T$ if and only if it is an extreme point of this convex set.[8]

Existence of ergodic measures

In the setting above it follows from the Banach–Alaoglu theorem that there always exist extreme points in $\mathcal{P}(X)^T$. Hence a transformation of a compact metric space always admits ergodic measures.

Ergodic decomposition

In general an invariant measure need not be ergodic, but as a consequence of Choquet theory it can always be expressed as the barycenter of a probability measure on the set of ergodic measures. This is referred to as the ergodic decomposition of the measure.[9]

Example

In the case of $X = \{1, \ldots, n\}$ with $T$ the permutation with two cycles considered above, the normalised counting measure is not ergodic. The ergodic measures for $T$ are the uniform measures $\mu_1, \mu_2$ supported on the subsets $\{1, 2\}$ and $\{3, \ldots, n\}$, and every $T$-invariant probability measure can be written in the form $t\mu_1 + (1-t)\mu_2$ for some $t \in [0, 1]$. In particular $\frac{2}{n}\mu_1 + \frac{n-2}{n}\mu_2$ is the ergodic decomposition of the counting measure.

Continuous systems

Everything in this section transfers verbatim to continuous actions of $\mathbb{R}$ or $\mathbb{R}_+$ on compact metric spaces.

Unique ergodicity

The transformation $T$ is said to be uniquely ergodic if there is a unique Borel probability measure $\mu$ on $X$ which is ergodic for $T$.

In the examples considered above, irrational rotations of the circle are uniquely ergodic;[10] shift maps are not.

Probabilistic interpretation: ergodic processes

If $(X_n)_{n \ge 1}$ is a discrete-time stochastic process on a space $\Omega$, it is said to be ergodic if the joint distribution of the variables on $\Omega^{\mathbb{N}}$ is invariant under the shift map $(x_n)_{n \ge 1} \mapsto (x_{n+1})_{n \ge 1}$ and is an ergodic measure for this shift. This is a particular case of the notions discussed above.

The simplest case is that of an independent and identically distributed process which corresponds to the shift map described above. Another important case is that of a Markov chain which is discussed in detail below.

A similar interpretation holds for continuous-time stochastic processes though the construction of the measurable structure of the action is more complicated.

Ergodicity of Markov chains

The dynamical system associated with a Markov chain

Let $S$ be a finite set. A Markov chain on $S$ is defined by a matrix $P \in [0, 1]^{S \times S}$, where $P(s_1, s_2)$ is the transition probability from $s_1$ to $s_2$, so for every $s \in S$ we have $\sum_{s' \in S} P(s, s') = 1$. A stationary measure for $P$ is a probability measure $\nu$ on $S$ such that $\nu P = \nu$; that is, $\sum_{s' \in S} \nu(s') P(s', s) = \nu(s)$ for all $s \in S$.

Using this data we can define a probability measure $\mu_\nu$ on the set $X = S^{\mathbb{Z}}$ with its product σ-algebra by giving the measures of the cylinders as follows:

$\mu_\nu(\cdots \times S \times \{(s_n, \ldots, s_m)\} \times S \times \cdots) = \nu(s_n)\, P(s_n, s_{n+1}) \cdots P(s_{m-1}, s_m).$

Stationarity of $\nu$ then means that the measure $\mu_\nu$ is invariant under the shift map $T\left((s_k)_{k \in \mathbb{Z}}\right) = (s_{k+1})_{k \in \mathbb{Z}}$.

Criterion for ergodicity

The measure $\mu_\nu$ is always ergodic for the shift map if the associated Markov chain is irreducible (any state can be reached with positive probability from any other state in a finite number of steps).[11]

The hypotheses above imply that there is a unique stationary measure for the Markov chain. In terms of the matrix $P$, a sufficient condition for this is that 1 be a simple eigenvalue of the matrix $P$ and all other eigenvalues of $P$ (in $\mathbb{C}$) are of modulus < 1.
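
A small numerical sketch of the quantities involved (the transition matrix here is an arbitrary illustrative choice, not one from the text): the stationary measure is the left eigenvector of P for the eigenvalue 1, irreducibility can be checked from the powers of P, and the measure of a cylinder is read off from the product formula above.

    import numpy as np

    # An arbitrary irreducible, aperiodic transition matrix on S = {0, 1, 2}.
    P = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.3, 0.5],
                  [0.4, 0.0, 0.6]])

    # Stationary measure: left eigenvector of P for eigenvalue 1, normalised to sum to 1.
    w, v = np.linalg.eig(P.T)
    nu = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    nu = nu / nu.sum()
    print(nu, np.allclose(nu @ P, nu))  # nu P = nu

    # Irreducibility: some power of P up to |S| connects every pair of states.
    reach = sum(np.linalg.matrix_power(P, k) for k in range(1, len(P) + 1))
    print(bool(np.all(reach > 0)))      # True for this chain

    # Measure of the cylinder {x_0 = 0, x_1 = 1, x_2 = 2} under mu_nu (with nu as initial law).
    print(nu[0] * P[0, 1] * P[1, 2])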

Note that in probability theory the Markov chain is called ergodic if in addition each state is aperiodic (the times where the return probability is positive are not multiples of a single integer >1). This is not necessary for the invariant measure to be ergodic; hence the notions of "ergodicity" for a Markov chain and the associated shift-invariant measure are different (the one for the chain is strictly stronger).[12]

Moreover the criterion is an "if and only if" if all communicating classes in the chain are recurrent and we consider all stationary measures.

Examples

Counting measure

If $P(s, s') = 1/|S|$ for all $s, s' \in S$ then the stationary measure is the uniform (normalised counting) measure, and the measure $\mu_\nu$ is the corresponding product measure. The Markov chain is ergodic, so the shift example from above is a special case of the criterion.

Non-ergodic Markov chains

Markov chains with recurrent communicating classes which are not irreducible are not ergodic, and this can be seen immediately as follows. If $D_1, D_2 \subsetneq S$ are two distinct recurrent communicating classes, there are nonzero stationary measures $\nu_1, \nu_2$ supported on $D_1, D_2$ respectively, and the subsets $D_1^{\mathbb{Z}}$ and $D_2^{\mathbb{Z}}$ are both shift-invariant and of measure 1/2 for the invariant probability measure associated with $\frac{1}{2}(\nu_1 + \nu_2)$. A very simple example of that is the chain on $S = \{1, 2\}$ given by the identity matrix $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ (both states are fixed).

A periodic chain

The Markov chain on $\{1, 2\}$ given by the matrix $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ is irreducible but periodic. Thus it is not ergodic in the sense of Markov chains, though the associated measure $\mu$ on $\{1, 2\}^{\mathbb{Z}}$ is ergodic for the shift map. However the shift is not mixing for this measure, as for the sets

$A = \{(x_k) : x_0 = 1\}$

and

$B = \{(x_k) : x_0 = 2\}$

we have $\mu(A) = \mu(B) = 1/2$ but

$\mu(T^{-n}(A) \cap B) = \begin{cases} 1/2 & \text{if } n \text{ is odd} \\ 0 & \text{if } n \text{ is even,} \end{cases}$

which does not converge to $\mu(A)\mu(B) = 1/4$.

Generalisations

The definition of ergodicity also makes sense for group actions. The classical theory (for invertible transformations) corresponds to actions of $\mathbb{Z}$ or $\mathbb{R}$.

For non-abelian groups there might not be invariant measures even on compact metric spaces. However the definition of ergodicity carries over unchanged if one replaces invariant measures by quasi-invariant measures.

Important examples are the action of a semisimple Lie group (or a lattice therein) on its Furstenberg boundary.

A measurable equivalence relation is said to be ergodic if all saturated subsets are either null or conull.

Notes

  1. ^ Walters 1982, §0.1, p. 2
  2. ^ Gallavotti, Giovanni (1995). "Ergodicity, ensembles, irreversibility in Boltzmann and beyond". Journal of Statistical Physics. 78 (5–6): 1571–1589. arXiv:chao-dyn/9403004. Bibcode:1995JSP....78.1571G. doi:10.1007/BF02180143. S2CID 17605281.
  3. ^ Feller, William (1 August 2008). An Introduction to Probability Theory and Its Applications (2nd ed.). Wiley India Pvt. Limited. p. 271. ISBN 978-81-265-1806-7.
  4. ^ Taleb, Nassim Nicholas (2019), "Probability, Risk, and Extremes", in Needham, Duncan (ed.), Extremes, Cambridge University Press, pp. 46–66
  5. ^ Walters 1982, p. 32.
  6. ^ Walters 1982, p. 29.
  7. ^ "Example of a measure-preserving system with dense orbits that is not ergodic". MathOverflow. September 1, 2011. Retrieved May 16, 2020.
  8. ^ Walters 1982, p. 152.
  9. ^ Walters 1982, p. 153.
  10. ^ Walters 1982, p. 159.
  11. ^ Walters 1982, p. 42.
  12. ^ "Different uses of the word "ergodic"". MathOverflow. September 4, 2011. Retrieved May 16, 2020.

References

  • Walters, Peter (1982). An Introduction to Ergodic Theory. Springer. ISBN 0-387-95152-0.
  • Brin, Michael; Stuck, Garrett (2002). Introduction to Dynamical Systems. Cambridge University Press. ISBN 0-521-80841-3.
