Ellsberg paradox


The Ellsberg paradox is a paradox of choice in which people's decisions produce inconsistencies with subjective expected utility theory.[1] The paradox was popularized by Daniel Ellsberg in his 1961 paper “Risk, Ambiguity, and the Savage Axioms”, although a version of it was noted considerably earlier by John Maynard Keynes.[2] It is generally taken to be evidence for ambiguity aversion, in which a person tends to prefer choices with quantifiable risks over those with unknown risks.


Ellsberg's findings indicate that people favour options whose probabilities are known over options whose probabilities are unknown. A decision maker will overwhelmingly favour a choice with a transparent likelihood of winning, even when the ambiguous alternative could produce greater utility. Given a set of choices that each carry known but varying levels of risk, people still prefer the choices with calculable risk, even when these offer a lower expected utility.[3]

Experimental research

Ellsberg's experimental research involved two separate thought experiments: the two-urn, two-colour scenario and the one-urn, three-colour scenario.

Two-urn paradox

There are two urns, each containing 100 balls. Urn A is known to contain 50 red and 50 black balls, while urn B contains red and black balls in an unknown proportion.

The following bets are offered to a participant:

Bet 1A: get $1 if red is drawn from urn A, $0 otherwise

Bet 2A: get $1 if black is drawn from urn A, $0 otherwise

Bet 1B: get $1 if red is drawn from urn B, $0 otherwise

Bet 2B: get $1 if black is drawn from urn B, $0 otherwise

Typically, participants were indifferent between Bet 1A and Bet 2A (consistent with expected utility theory), but strictly preferred Bet 1A to Bet 1B and Bet 2A to Bet 2B. This result is generally interpreted as a consequence of ambiguity aversion (also known as uncertainty aversion): people intrinsically dislike situations in which they cannot attach probabilities to outcomes, and here they favour the bets for which the winning probability and payoff are known (0.5 and $1 respectively).
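
No single subjective belief about urn B can rationalize this pattern: preferring Bet 1A to Bet 1B means judging red in urn B less likely than 0.5, while preferring Bet 2A to Bet 2B means judging black in urn B less likely than 0.5, and the two probabilities must sum to one. A minimal Python sketch (the helper name and the grid of candidate beliefs are illustrative assumptions, not part of Ellsberg's paper) makes the same point by exhaustive search:

    # Search for a subjective probability p = P(red in urn B) under which both
    # observed strict preferences (Bet 1A > Bet 1B and Bet 2A > Bet 2B) hold.

    def expected_value(p_win, prize=1.0):
        """Expected payoff of a bet paying `prize` with probability p_win."""
        return p_win * prize

    rationalizing_beliefs = [
        p / 100
        for p in range(101)                                    # candidate p values
        if expected_value(0.5) > expected_value(p / 100)       # Bet 1A > Bet 1B
        and expected_value(0.5) > expected_value(1 - p / 100)  # Bet 2A > Bet 2B
    ]
    print(rationalizing_beliefs)  # [] -- no belief supports both preferences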

One-urn paradox

There is one urn containing 90 balls: 30 balls are red, while the remaining 60 are either black or yellow in unknown proportions. The balls are well mixed so that each individual ball is as likely to be drawn as any other. The participants then choose between two gambles:

Gamble A: You receive $100 if you draw a red ball.

Gamble B: You receive $100 if you draw a black ball.

Additionally, the participant is offered a second choice between two gambles, on the same urn:

Gamble C: You receive $100 if you draw a red or yellow ball.

Gamble D: You receive $100 if you draw a black or yellow ball.

Ellsberg's experimental setup combines two economic concepts: Knightian uncertainty, in the unquantifiable mix of black and yellow balls within the urn, and known probability, in that a red ball is drawn with probability 1/3 and a non-red ball with probability 2/3.

Utility theory interpretation

Utility theory models the choice by assuming that, in choosing between these gambles, a person assigns a subjective probability to the split between yellow and black among the non-red balls, and then computes the expected utility of each gamble.

Since the prizes are the same, it follows that you will prefer Gamble A to Gamble B if and only if you believe that drawing a red ball is more likely than drawing a black ball (according to expected utility theory). Also, there would be no clear preference between the choices if you thought that a red ball was as likely as a black ball. Similarly, it follows that you will prefer Gamble C to Gamble D if, and only if, you believe that drawing a red or yellow ball is more likely than drawing a black or yellow ball. It might seem intuitive that, if drawing a red ball is more likely than drawing a black ball, then drawing a red or yellow ball is also more likely than drawing a black or yellow ball. So, supposing you prefer Gamble A to Gamble B, it follows that you will also prefer Gamble C to Gamble D; and supposing instead that you prefer Gamble B to Gamble A, it follows that you will also prefer Gamble D to Gamble C.

Ellsberg's findings violate these assumptions of expected utility theory: participants strictly preferred Gamble A to Gamble B and, at the same time, Gamble D to Gamble C.

Numerical demonstration

Mathematically, let R, Y, and B denote the subjective probabilities of drawing a red, yellow, and black ball, with R = 1/3 and Y + B = 2/3. If you strictly prefer Gamble A to Gamble B, utility theory presumes this preference is reflected in the expected utilities of the two gambles:

R·U($100) + Y·U($0) + B·U($0) > R·U($0) + Y·U($0) + B·U($100),

which, given U($100) > U($0), simplifies to R > B. Strictly preferring Gamble D to Gamble C likewise gives

R·U($0) + Y·U($100) + B·U($100) > R·U($100) + Y·U($100) + B·U($0),

which simplifies to B > R. We thus reach a contradiction in our utility calculations, indicating that your preferences are inconsistent with expected utility theory.
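
The contradiction can also be verified by brute force. The Python sketch below (helper names are illustrative; the payoffs are the $100/$0 prizes above) enumerates every possible composition of the urn and confirms that no belief about the number of black balls supports both observed preferences at once:

    # Enumerate all possible urn compositions (0..60 black balls among the 60
    # non-red balls) and test whether any single belief yields both observed
    # strict preferences: Gamble A > Gamble B and Gamble D > Gamble C.

    def expected_payoff(winning_balls, total=90, prize=100):
        """Expected dollar payoff of a gamble winning on `winning_balls` draws."""
        return prize * winning_balls / total

    rationalizing_beliefs = []
    for black in range(61):
        yellow = 60 - black
        a = expected_payoff(30)               # Gamble A wins on red
        b = expected_payoff(black)            # Gamble B wins on black
        c = expected_payoff(30 + yellow)      # Gamble C wins on red or yellow
        d = expected_payoff(black + yellow)   # Gamble D wins on black or yellow
        if a > b and d > c:
            rationalizing_beliefs.append(black)

    print(rationalizing_beliefs)  # [] -- A > B forces black < 30, D > C forces black > 30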

Generality of the paradox

The result holds regardless of your utility function. Indeed, the amount of the payoff is likewise irrelevant: whichever gamble is selected, the prize for winning is the same and the cost of losing is the same (no cost), so ultimately there are only two outcomes, receiving a specific amount of money or receiving nothing. It is therefore sufficient to assume that receiving some money is preferred to receiving nothing (and even this assumption is not necessary: in the mathematical treatment above it was assumed that U($100) > U($0), but a contradiction can still be obtained for U($100) < U($0) and for U($100) = U($0)).

In addition, the result holds regardless of your risk aversion, because all the gambles involve risk: by choosing Gamble D you have a 1 in 3 chance of receiving nothing, and by choosing Gamble A you have a 2 in 3 chance of receiving nothing. If Gamble A were less risky than Gamble B, it would follow[4] that Gamble C was less risky than Gamble D (and vice versa), so the pattern of preferences cannot be explained by risk aversion.

However, because the exact chances of winning are known for Gambles A and D, and not known for Gambles B and C, this can be taken as evidence for some sort of ambiguity aversion which cannot be accounted for in expected utility theory. It has been demonstrated that this phenomenon occurs only when the choice set permits comparison of the ambiguous proposition with a less vague proposition (but not when ambiguous propositions are evaluated in isolation).[5]

Possible explanations

There have been various attempts to provide decision-theoretic explanations of Ellsberg's observation. Since the probabilistic information available to the decision-maker is incomplete, these attempts sometimes focus on quantifying the non-probabilistic ambiguity which the decision-maker faces – see Knightian uncertainty. That is, these alternative approaches sometimes suppose that the agent formulates a subjective (though not necessarily Bayesian) probability for possible outcomes.

One such attempt is based on info-gap decision theory. The agent is told precise probabilities of some outcomes, though the practical meaning of the probability numbers is not entirely clear. For instance, in the gambles discussed above, the probability of a red ball is 30/90, which is a precise number. Nonetheless, the agent may not distinguish, intuitively, between this and, say, 30/91. No probability information whatsoever is provided regarding other outcomes, so the agent has very unclear subjective impressions of these probabilities.

In light of the ambiguity in the probabilities of the outcomes, the agent is unable to evaluate a precise expected utility. Consequently, a choice based on maximizing the expected utility is also impossible. The info-gap approach supposes that the agent implicitly formulates info-gap models for the subjectively uncertain probabilities. The agent then tries to satisfice the expected utility and to maximize the robustness against uncertainty in the imprecise probabilities. This robust-satisficing approach can be developed explicitly to show that the choices of decision-makers should display precisely the preference reversal which Ellsberg observed.[6]
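
A minimal numerical sketch of this robust-satisficing idea follows; the model, the names, and the aspiration levels are assumptions made for illustration, not Ben-Haim's exact formulation. The uncertain quantity is the proportion b of black balls, nominally estimated at one third; each gamble is scored by the largest uncertainty horizon around that estimate for which its worst-case expected payoff still meets the aspiration:

    # Illustrative info-gap robust-satisficing for the one-urn gambles.
    # Uncertain parameter: b = proportion of black balls, b in [0, 2/3],
    # nominal estimate b_hat = 1/3. Robustness of a gamble = largest horizon
    # alpha such that its worst-case expected payoff over the interval
    # [b_hat - alpha, b_hat + alpha] (clipped to [0, 2/3]) meets an aspiration.

    import numpy as np

    B_HAT, B_MAX = 1 / 3, 2 / 3

    payoffs = {                                   # expected $ payoff given b
        "A (red)":             lambda b: 100 * (1 / 3),
        "B (black)":           lambda b: 100 * b,
        "C (red or yellow)":   lambda b: 100 * (1 - b),
        "D (black or yellow)": lambda b: 100 * (2 / 3),
    }

    def robustness(payoff, aspiration, steps=2000):
        """Largest horizon alpha whose worst-case payoff meets `aspiration`."""
        best = 0.0
        for alpha in np.linspace(0.0, B_MAX, steps):
            lo, hi = max(0.0, B_HAT - alpha), min(B_MAX, B_HAT + alpha)
            worst = min(payoff(b) for b in np.linspace(lo, hi, 200))
            if worst < aspiration:
                break
            best = alpha
        return best

    # Compare each known-probability gamble with its ambiguous counterpart,
    # using an aspiration just below the known gamble's sure expected payoff.
    for aspiration, pair in [(33.0, ("A (red)", "B (black)")),
                             (66.0, ("D (black or yellow)", "C (red or yellow)"))]:
        for name in pair:
            print(f"aspiration ${aspiration:.0f}: {name:22s} "
                  f"robustness {robustness(payoffs[name], aspiration):.3f}")
    # A and D stay above their aspirations at every horizon probed; B and C
    # fail as soon as b can drift slightly from the estimate -- so the
    # robust-satisficer reproduces Ellsberg's preference for A and D.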

Another possible explanation is that this type of game triggers a deceit-aversion mechanism. In real-world situations, many people naturally assume that if they are not told the probability of a certain event, the omission is meant to deceive them. Participants then make the same decisions in the experiment as they would in related, but not identical, real-life problems in which the experimenter would likely be a deceiver acting against the subject's interests. When faced with the choice between a red ball and a black ball, the known probability of 30/90 is compared with the lower part of the 0/90 to 60/90 range (the possible probability of drawing a black ball). The average person expects there to be fewer black balls than yellow balls, because in most real-world situations it would be to the experimenter's advantage to put fewer black balls in the urn when offering such a gamble. On the other hand, when offered a choice between red-or-yellow and black-or-yellow, people assume there must be fewer than 30 yellow balls, as would be necessary to deceive them. When making the decision, people may simply neglect to consider that the experimenter has no chance to modify the contents of the urn between the draws; and in real-life situations, even if the urn is not to be modified, people would fear being deceived on that front as well.[7]

Decisions under uncertainty aversion

In order to describe how an individual would make decisions in a world where uncertainty aversion exists, modifications of the expected utility framework have been proposed. These include:

  • Choquet expected utility: Replaces the standard expectation with the Choquet integral, a non-additive integral developed by French mathematician Gustave Choquet, as a way of measuring expected utility in situations with unknown parameters. The approach is seen as one way to reconcile rational choice theory and expected utility theory with Ellsberg's seminal findings.
  • Maxmin expected utility: Axiomatized by Gilboa and Schmeidler,[8] this is a widely used alternative to standard utility maximisation that accommodates ambiguity-averse preferences: the decision maker entertains a set of priors and evaluates each gamble by its minimum expected utility over that set. The model accounts for intuitive decisions that violate ambiguity neutrality, as established by the Ellsberg paradox (and, for risk rather than ambiguity, the Allais paradox); a minimal numerical sketch follows this list.
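
The sketch below applies the maxmin rule to the one-urn experiment (variable names are illustrative assumptions; the set of priors is taken to be every possible count of black balls, the natural choice here):

    # Maxmin expected utility for the one-urn gambles: score each gamble by its
    # minimum expected payoff over all priors consistent with the urn, i.e.
    # every possible count of black balls among the 60 non-red balls.

    def expected_payoff(winning_balls, total=90, prize=100):
        return prize * winning_balls / total

    def maxmin_value(winning_balls_of):
        """Worst-case expected payoff over black-ball counts 0..60."""
        return min(expected_payoff(winning_balls_of(black)) for black in range(61))

    gambles = {
        "A (red)":             lambda black: 30,
        "B (black)":           lambda black: black,
        "C (red or yellow)":   lambda black: 30 + (60 - black),
        "D (black or yellow)": lambda black: 60,
    }

    for name, winning in gambles.items():
        print(name, round(maxmin_value(winning), 2))
    # A: 33.33  B: 0.0  C: 33.33  D: 66.67 -- the maxmin criterion ranks
    # A over B and D over C, reproducing Ellsberg's observed preferences.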

Alternative explanations

Other alternative explanations include the competence hypothesis[9] and the comparative ignorance hypothesis.[5] Both theories attribute the source of the ambiguity aversion to the participant's pre-existing knowledge.

Daniel Ellsberg's 1962 paper, "Risk, Ambiguity and Decision"

Upon graduating in economics from Harvard in 1952, Ellsberg left immediately to serve as a US Marine, before returning to Harvard in 1957 to complete his post-graduate studies on decision making under uncertainty.[10] Ellsberg left his graduate studies to join the RAND Corporation as a strategic analyst, but maintained his academic work on the side. He presented his breakthrough paper on the Ellsberg paradox at the December 1960 meeting of the Econometric Society in St. Louis. His doctoral thesis, Risk, Ambiguity and Decision, built upon the previous works of both J.M. Keynes and F.H. Knight, challenging the then-dominant rational decision theory and extending the academic literature. The thesis was not published until 2001, some 40 years after it was written, in part because of the Pentagon Papers scandal then encircling Ellsberg's life. It remains highly influential within economic research on risk, ambiguity, and uncertainty.


References

  1. Ellsberg, Daniel (1961). "Risk, Ambiguity, and the Savage Axioms" (PDF). Quarterly Journal of Economics. 75 (4): 643–669. doi:10.2307/1884324. JSTOR 1884324.
  2. Keynes, John Maynard (1921). A Treatise on Probability. London: Macmillan. pp. 75–76, paragraph 315, footnote 2.
  3. EconPort discussion of the paradox.
  4. Segal, Uzi (1987). "The Ellsberg Paradox and Risk Aversion: An Anticipated Utility Approach" (PDF). International Economic Review. 28 (1): 175–202. doi:10.2307/2526866. JSTOR 2526866.
  5. Fox, Craig R.; Tversky, Amos (1995). "Ambiguity Aversion and Comparative Ignorance". Quarterly Journal of Economics. 110 (3): 585–603. CiteSeerX 10.1.1.395.8835. doi:10.2307/2946693. JSTOR 2946693.
  6. Ben-Haim, Yakov (2006). Info-gap Decision Theory: Decisions Under Severe Uncertainty (2nd ed.). Academic Press. Section 11.1. ISBN 978-0-12-373552-2.
  7. Lima Filho, Roberto IRL (July 2, 2009). "Rationality Intertwined: Classical vs Institutional View". pp. 5–6. doi:10.2139/ssrn.2389751. SSRN 2389751.
  8. Gilboa, Itzhak; Schmeidler, David (1989). "Maxmin expected utility with non-unique prior". Journal of Mathematical Economics. 18 (2): 141–153.
  9. Heath, Chip; Tversky, Amos (1991). "Preference and Belief: Ambiguity and Competence in Choice under Uncertainty". Journal of Risk and Uncertainty. 4: 5–28. CiteSeerX 10.1.1.138.6159. doi:10.1007/bf00057884. S2CID 146410959.
  10. Sakai, Yasuhiro (2018). "Daniel Ellsberg on J.M. Keynes and F.H. Knight: risk, ambiguity and uncertainty". Evolutionary and Institutional Economics Review. 16: 1–18.
