Causal Markov condition


The Markov condition, sometimes called the Markov assumption, is an assumption made in Bayesian probability theory that every node in a Bayesian network is conditionally independent of its nondescendants, given its parents. Stated loosely, it is assumed that a node has no bearing on nodes which do not descend from it. In a DAG, this local Markov condition is equivalent to the global Markov condition, which states that d-separations in the graph also correspond to conditional independence relations.[1][2] This also means that a node is conditionally independent of the entire network, given its Markov blanket.
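
In symbols, with NonDescendants(X) denoting the nondescendants of X and Parents(X) its parents, the local Markov condition and the factorization it implies can be written as follows (a standard formulation of the condition above, not tied to any one of the cited sources):

```latex
% Local Markov condition: each node is independent of its
% nondescendants, conditional on its parents.
X \;\perp\!\!\!\perp\; \mathrm{NonDescendants}(X) \;\mid\; \mathrm{Parents}(X)

% Equivalently, the joint distribution factorizes over the DAG:
P(X_1, \ldots, X_n) \;=\; \prod_{i=1}^{n} P\left(X_i \mid \mathrm{Parents}(X_i)\right)
```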

The related Causal Markov (CM) condition states that, conditional on the set of all its direct causes, a node is independent of all variables which are not direct causes or direct effects of that node.[3] In the event that the structure of a Bayesian network accurately depicts causality, the two conditions are equivalent. However, a network may accurately embody the Markov condition without depicting causality, in which case it should not be assumed to embody the causal Markov condition.

Definition

Let G be an acyclic causal graph (a graph in which each node appears only once along any path) with vertex set V and let P be a probability distribution over the vertices in V generated by G. G and P satisfy the Causal Markov Condition if every node X in V is independent of V \ (Descendants(X) ∪ Parents(X)) given Parents(X).[4]
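
As a concrete illustration, the sketch below works through the definition for a small hypothetical DAG (a fork A → B, A → C invented for this example; it does not come from the cited texts). For each node X it prints the set V \ (Descendants(X) ∪ Parents(X)) that the condition requires X to be independent of, given Parents(X):

```python
# Hypothetical DAG, given by each node's set of direct causes (parents):
# A is a common cause of B and C, and there is no edge between B and C.
parents = {
    "A": set(),
    "B": {"A"},
    "C": {"A"},
}

V = set(parents)

def descendants(node):
    """All nodes reachable from `node` along directed edges."""
    children = {n for n, ps in parents.items() if node in ps}
    reached = set(children)
    for child in children:
        reached |= descendants(child)
    return reached

# Causal Markov Condition: X is independent of
# V \ (Descendants(X) ∪ Parents(X)) given Parents(X).
for x in sorted(V):
    rest = V - descendants(x) - parents[x] - {x}  # X itself is excluded
    print(f"{x}: independent of {sorted(rest)} given parents {sorted(parents[x])}")
```

For this fork, the condition reduces to the familiar statement that the two effects B and C are independent once their common cause A is held fixed.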

Motivation

Statisticians are enormously interested in the ways in which certain events and variables are connected. A precise notion of what constitutes a cause and an effect is necessary to understand how they are connected. The central idea behind the philosophical study of causation is that causes raise the probabilities of their effects, all else being equal.

A deterministic interpretation of causation means that if A causes B, then A must always be followed by B. In this sense, smoking does not cause cancer because some smokers never develop cancer.

On the other hand, a probabilistic interpretation simply means that causes raise the probability of their effects. In this sense, changes in meteorological readings associated with a storm do cause that storm, since they raise its probability. (However, simply looking at a barometer does not change the probability of the storm; for a more detailed analysis, see [5].)

The looseness of the definition of probabilistic causation raises the question of whether events which are traditionally classified as effects (e.g. a wet piece of paper after spilling water on it) can actually make a difference to the probability of their causes. In a world without CM, the wetness of a piece of paper changes the probability that a glass of water was spilled on it. In a world with CM, only events which are parents of an event change its probability (e.g. gravity, a hand passing by the water glass, the nearness of the paper).

Implications

Dependence and Causation

It follows from the definition that if X and Y are in V and are probabilistically dependent, then either X causes Y, Y causes X, or X and Y are both effects of some common cause Z in V.[3]
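
The common-cause case is easy to see in a small simulation (the structure Z → X, Z → Y and all the probabilities below are invented for illustration): X and Y are dependent even though neither causes the other, and the dependence disappears once the common cause Z is conditioned on.

```python
import random

random.seed(0)

# Hypothetical structure: Z is a common cause of X and Y;
# there is no direct edge between X and Y.
def sample():
    z = random.random() < 0.5
    x = random.random() < (0.9 if z else 0.1)
    y = random.random() < (0.8 if z else 0.2)
    return z, x, y

draws = [sample() for _ in range(200_000)]

def prob(event, given=lambda s: True):
    """Estimate P(event | given) from the simulated draws."""
    pool = [s for s in draws if given(s)]
    return sum(1 for s in pool if event(s)) / len(pool)

# Marginal dependence: P(Y=1 | X=1) differs noticeably from P(Y=1).
print(prob(lambda s: s[2], lambda s: s[1]), prob(lambda s: s[2]))

# Conditioning on the common cause removes the dependence:
# P(Y=1 | X=1, Z=1) is close to P(Y=1 | Z=1).
print(prob(lambda s: s[2], lambda s: s[1] and s[0]),
      prob(lambda s: s[2], lambda s: s[0]))
```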

Screening

It once again follows from the definition that the parents of X screen X from other "indirect causes" of X (parents of Parents(X)) and other effects of Parents(X) which are not also effects of X.[3]
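
The screening claim can be checked exactly on a made-up example with a grandparent G → P → X and a sibling effect S of the parent P (all probability tables below are invented): once the parent P is fixed, neither G nor S changes the distribution of X.

```python
import itertools

# Hypothetical graph G -> P -> X and P -> S, with invented probability tables.
p_g = {0: 0.4, 1: 0.6}
p_p_given_g = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_x_given_p = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}
p_s_given_p = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.1, 1: 0.9}}

def joint(g, p, x, s):
    # The joint factorizes as a product of each node given its parents.
    return p_g[g] * p_p_given_g[g][p] * p_x_given_p[p][x] * p_s_given_p[p][s]

def cond_x(x, g, p, s):
    """P(X=x | G=g, P=p, S=s), computed from the joint by enumeration."""
    denom = sum(joint(g, p, x2, s) for x2 in (0, 1))
    return joint(g, p, x, s) / denom

# Screening: P(X | G, P, S) equals P(X | P) for every assignment.
for g, p, x, s in itertools.product((0, 1), repeat=4):
    assert abs(cond_x(x, g, p, s) - p_x_given_p[p][x]) < 1e-12
print("Parents(X) screen X from the indirect cause G and the sibling effect S.")
```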

Examples

In a simple view, releasing one's hand from a hammer causes the hammer to fall. However, doing so in outer space does not produce the same outcome, calling into question whether releasing one's fingers from a hammer always causes it to fall.

A causal graph could be created to acknowledge that both the presence of gravity and the release of the hammer contribute to its falling. However, it would be very surprising if the surface underneath the hammer affected its falling. This essentially states the Causal Markov Condition: given the existence of gravity and the release of the hammer, it will fall regardless of what is beneath it.

Notes

  1. ^ Geiger, Dan; Pearl, Judea (1990). "On the Logic of Causal Models". Machine Intelligence and Pattern Recognition. 9: 3–14. doi:10.1016/b978-0-444-88650-7.50006-8.
  2. ^ Lauritzen, S. L.; Dawid, A. P.; Larsen, B. N.; Leimer, H.-G. (August 1990). "Independence properties of directed Markov fields". Networks. 20 (5): 491–505. doi:10.1002/net.3230200503.
  3. ^ a b c Hausman, D.M.; Woodward, J. (December 1999). "Independence, Invariance, and the Causal Markov Condition" (PDF). British Journal for the Philosophy of Science. 50 (4): 521–583. doi:10.1093/bjps/50.4.521.
  4. ^ Spirtes, Peter; Glymour, Clark; Scheines, Richard (1993). Causation, Prediction, and Search. Lecture Notes in Statistics. Vol. 81. New York, NY: Springer New York. doi:10.1007/978-1-4612-2748-9. ISBN 9781461276500.
  5. ^ Pearl, Judea (2009). Causality. Cambridge: Cambridge University Press. doi:10.1017/cbo9780511803161. ISBN 9780511803161.