Bayesian game

From Wikipedia, the free encyclopedia

In game theory, a Bayesian game is a game that models the outcome of player interactions using aspects of Bayesian probability. Bayesian games are notable because they allowed, for the first time in game theory, solutions to be specified for games with incomplete information.

Hungarian economist John C. Harsanyi introduced the concept of Bayesian games in three papers published in 1967 and 1968.[1][2][3] He was awarded the Nobel Memorial Prize in Economic Sciences in 1994 for these and other contributions to game theory. Roughly speaking, Harsanyi defined Bayesian games in the following way: at the start of the game, nature assigns each player a set of characteristics. By mapping probability distributions onto these characteristics and by calculating the outcome of the game using Bayesian probability, one obtains a game whose solution is, for technical reasons, far easier to calculate than that of a similar game in a non-Bayesian context. For those technical reasons, see the Specification of games section in this article.

Specification of games

Technical definition

In a Bayesian game, one has to specify strategy spaces, type spaces, payoff functions and prior beliefs. A strategy for a player is a complete plan of action that covers every contingency that might arise for every type that player might be. A type space for a player is simply the set of all possible types of that player. The beliefs of a player describe that player's uncertainty about the types of the other players (for example, does player A believe player B to be a hawk or a dove?). The payoff function describes the value a player attaches to specific outcomes of the game, and the prior beliefs describe the beliefs that players hold about the other players at the start of the game.

Let $\Delta(X)$ denote the set of all probability distributions on a set $X$. A Bayesian game is[4] a tuple $(N, A, T, p, u)$ where

  1. $N$ is a set of players;
  2. $A_i$ is a set of actions for player $i$, with $A = A_1 \times \cdots \times A_{|N|}$;
  3. $T_i$ is a set of types for player $i$, with $T = T_1 \times \cdots \times T_{|N|}$;
  4. $p \in \Delta(T)$ is a joint distribution of types;
  5. $u_i : A \times T \to \mathbb{R}$ is a payoff function for player $i$.

A pure strategy for player $i$ is a function $s_i : T_i \to A_i$. A mixed strategy for player $i$ is a function $\sigma_i : T_i \to \Delta(A_i)$. Note that a player's strategy depends only on their own type.

A strategy profile $\sigma = (\sigma_1, \ldots, \sigma_{|N|})$ is a strategy for each player. A strategy profile determines expected payoffs for each player, where the expectation is taken over both the type profile $t \in T$ (drawn according to the joint distribution $p$) and the randomization over actions contained in the mixed strategy profile $\sigma$.
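
Concretely, the ex ante expected payoff of player $i$ under a mixed strategy profile $\sigma$ can be written as follows, where $\sigma_j(a_j \mid t_j)$ denotes the probability that type $t_j$ of player $j$ plays action $a_j$ (this is a standard way of spelling out the expectation described above, introduced here only for illustration):

$$
\mathbb{E}[u_i(\sigma)] \;=\; \sum_{t \in T} p(t) \sum_{a \in A} \Big( \prod_{j \in N} \sigma_j(a_j \mid t_j) \Big)\, u_i(a, t).
$$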

A player's payoff can depend on other players' types. If $u_i$ depends only on the action profile $a$ and on $t_i$, but not on $t_{-i}$, the game is sometimes said to have private values.[5]

Improvements from non-Bayesian games

There are two important and novel aspects to Bayesian games that were themselves specified by Harsanyi.[6] The first is that a Bayesian game is structured like a complete information game, except that, by attaching probabilities to the players' characteristics, the resulting game functions as though it were a game of incomplete information. This accomplishes two goals: the players can be modeled as having incomplete information, while the probability space of the game still obeys the law of total probability. The second is that Bayesian games do not require infinite sequential calculations. Infinite sequential calculations would arise where players (essentially) try to "get into each other's heads" by asking questions like "If I expect some action from player B, then player B will anticipate that I expect that action, so then I should anticipate that anticipation", ad infinitum. Bayesian games allow these outcomes to be calculated in one move by simultaneously assigning probability weights to the different outcomes. The effect is that Bayesian games make it possible to model a number of games that would be irrational to compute in a non-Bayesian setting.

Bayesian Nash equilibrium

In a non-Bayesian game, a strategy profile is a Nash equilibrium if every strategy in that profile is a best response to every other strategy in the profile; i.e., there is no strategy that a player could play that would yield a higher payoff, given all the strategies played by the other players.

An analogous concept can be defined for a Bayesian game, the difference being that every player's strategy maximizes his expected payoff given his beliefs about the state of nature. A player's beliefs about the state of nature are formed by conditioning the prior probabilities on his own type according to Bayes' rule.
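
In the notation of the Technical definition section, writing $t_{-i}$ for the types of the players other than $i$ and $T_{-i}$ for the set of such profiles, a player of type $t_i$ forms beliefs by conditioning the common prior $p$ on their own type:

$$
p(t_{-i} \mid t_i) \;=\; \frac{p(t_i, t_{-i})}{\sum_{t'_{-i} \in T_{-i}} p(t_i, t'_{-i})}.
$$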

A Bayesian Nash equilibrium (BNE) is defined as a strategy profile that maximizes the expected payoff for each player given their beliefs and given the strategies played by the other players. That is, a strategy profile $\sigma$ is a Bayesian Nash equilibrium if and only if, for every player $i$, keeping the strategies of every other player fixed, the strategy $\sigma_i$ maximizes the expected payoff of player $i$ according to that player's beliefs.[4]
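
For pure strategies, one common way to state this condition is the following: a profile $s^* = (s_1^*, \ldots, s_{|N|}^*)$ with $s_i^* : T_i \to A_i$ is a Bayesian Nash equilibrium if, for every player $i$ and every type $t_i$ occurring with positive probability,

$$
s_i^*(t_i) \;\in\; \arg\max_{a_i \in A_i} \sum_{t_{-i} \in T_{-i}} p(t_{-i} \mid t_i)\, u_i\big((a_i, s_{-i}^*(t_{-i})), (t_i, t_{-i})\big),
$$

where $s_{-i}^*(t_{-i})$ denotes the actions chosen by the other players given their types. An analogous condition, with an additional expectation over the randomization, applies to mixed strategies.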

For finite Bayesian games, i.e., where both the action and the type spaces are finite, there are two equivalent representations. The first is called the agent-form game (see Theorem 9.51 of the Game Theory book[7]), which expands the number of players from $|N|$ to $\sum_{i \in N} |T_i|$, i.e., every type of each player becomes a player. The second is called the induced normal form (see Section 6.3.3 of Multiagent Systems[8]), which still has $|N|$ players but expands the number of player $i$'s actions from $|A_i|$ to $|A_i|^{|T_i|}$, i.e., a pure policy is a combination of the actions the player would take for each of their possible types. A Nash equilibrium (NE) can be computed in either of these two equivalent representations, and the BNE can then be recovered from the NE; a brute-force sketch using the induced normal form is given after the list below.

  • If we further consider two players with a zero-sum objective function, then we can form a linear program to compute the BNE.[9]
  • If we further consider two players with a general-sum objective function, then we can form a mathematical program with a bilinear objective function and linear constraints to compute the BNE (see Theorem 1 of the paper[10]). Although no nonlinear solver can guarantee a global optimum, we can check whether a BNE has been reached by looking at the value of the program's objective function, which equals 0 at a BNE. A value closer to 0 therefore represents a smaller error and can serve as a metric of solution quality. This mathematical programming method extends directly to N-player games.
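
The enumeration idea behind the induced normal form can be illustrated with a short brute-force sketch. The game data below (types, actions, prior and payoffs) are made-up placeholders rather than anything from the cited references; the point is only to show how pure policies, one action per type, are enumerated and checked for mutual best responses in ex ante expected payoff:

```python
from itertools import product

# Illustrative 2-player finite Bayesian game: player 0 has one type,
# player 1 has two types ("a" and "b"); the prior is uniform over them.
types   = [("t1",), ("a", "b")]
actions = [("L", "R"), ("U", "D")]
prior   = {("t1", "a"): 0.5, ("t1", "b"): 0.5}

def u(i, a, t):
    """Illustrative payoff u_i(action profile, type profile)."""
    table = {("L", "U"): (1, 1), ("L", "D"): (0, 0),
             ("R", "U"): (0, 0), ("R", "D"): (2, 1) if t[1] == "a" else (0, 2)}
    return table[a][i]

# Induced normal form: a pure policy for player i assigns one action to
# each of i's types, so player i has |A_i| ** |T_i| pure policies.
policies = [list(product(actions[i], repeat=len(types[i]))) for i in range(2)]

def expected_payoff(i, prof):
    """Ex ante expected payoff of player i under a pure policy profile."""
    total = 0.0
    for t, pt in prior.items():
        a = tuple(prof[j][types[j].index(t[j])] for j in range(2))
        total += pt * u(i, a, t)
    return total

# Pure-policy Nash equilibria of the induced normal form correspond to
# pure-strategy BNE of the Bayesian game.
for prof in product(*policies):
    if all(expected_payoff(i, prof) >= expected_payoff(i, prof[:i] + (dev,) + prof[i+1:])
           for i in range(2) for dev in policies[i]):
        print("Pure BNE:", prof)
```

This enumeration is exponential in the number of types, which is one reason the linear and bilinear programming formulations mentioned above are useful for larger games.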

Variants of Bayesian equilibrium

Perfect Bayesian equilibrium

Bayesian Nash equilibrium can result in implausible equilibria in dynamic games, where players move sequentially rather than simultaneously. As in games of complete information, these can arise via non-credible strategies off the equilibrium path. In games of incomplete information there is also the additional possibility of non-credible beliefs.

To deal with these issues, perfect Bayesian equilibrium, in the spirit of subgame perfect equilibrium, requires that, starting from any information set, subsequent play be optimal. Furthermore, it requires that beliefs be updated consistently with Bayes' rule on every path of play that occurs with positive probability.
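
In generic terms (not the formulation of any particular textbook), the belief-consistency requirement says that at any information set reached with positive probability under the strategy profile, the belief $\mu$ about the type profile $t$ is obtained from the prior by Bayes' rule, conditioning on the observed history of play $h$:

$$
\mu(t \mid h) \;=\; \frac{p(t)\,\Pr(h \mid t)}{\sum_{t' \in T} p(t')\,\Pr(h \mid t')}.
$$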

Stochastic Bayesian games

The definition of Bayesian games has been combined with stochastic games to allow for environment states (e.g. physical world states) and stochastic transitions between states.[11] The resulting "stochastic Bayesian game" model is solved via a recursive combination of the Bayesian Nash equilibrium and the Bellman optimality equation.

Incomplete information over collective agency

The definition of Bayesian games and Bayesian equilibrium has been extended to deal with collective agency. One approach is to continue to treat individual players as reasoning in isolation, but to allow them, with some probability, to reason from the perspective of a collective.[12] Another approach is to assume that players within any collective agent know that the agent exists, but that other players do not know this, although they suspect it with some probability.[13] For example, Alice and Bob may sometimes optimize as individuals and sometimes collude as a team, depending on the state of nature, but other players may not know which of these is the case.

Example

Sheriff's Dilemma

A sheriff faces an armed suspect. Both must simultaneously decide whether to shoot the other or not.

The suspect can either be of type "criminal" or type "civilian". The sheriff has only one type. The suspect knows his own type and the sheriff's type, but the sheriff does not know the suspect's type. Thus, there is incomplete information (because the suspect has private information), making it a Bayesian game. There is a probability p that the suspect is a criminal and a probability 1-p that the suspect is a civilian; both players are aware of this probability (the common prior assumption, which allows the game to be converted into a complete-information game with imperfect information).

The sheriff would rather defend himself and shoot if the suspect shoots, or not shoot if the suspect does not (even if the suspect is a criminal). The suspect would rather shoot if he is a criminal, even if the sheriff does not shoot, but would rather not shoot if he is a civilian, even if the sheriff shoots. Thus, the payoff matrix of this normal-form game depends for both players on the type of the suspect. The payoffs are assumed to be given as follows (the suspect's payoff is listed first in each cell):

Type = "Civilian"                        Sheriff's action
                                         Shoot         Not
  Suspect's action       Shoot           -3, -1        -1, -2
                         Not             -2, -1         0, 0

Type = "Criminal"                        Sheriff's action
                                         Shoot         Not
  Suspect's action       Shoot            0, 0          2, -2
                         Not             -2, -1        -1, 1

If both players are rational, both know that both players are rational, and everything that is known by any player is known to be known by every player (i.e. player 1 knows that player 2 knows that player 1 is rational, and player 2 knows this, etc., ad infinitum: common knowledge), play in the game will be as follows according to perfect Bayesian equilibrium:[14][15]

When the type is "civilian", the dominant strategy for the suspect is not to shoot, and when the type is "criminal", the dominant strategy for the suspect is to shoot; the alternative, strictly dominated strategies can thus be removed. Given this, if the sheriff shoots, he will have a payoff of 0 with probability p and a payoff of -1 with probability 1-p, i.e. an expected payoff of p-1; if the sheriff does not shoot, he will have a payoff of -2 with probability p and a payoff of 0 with probability 1-p, i.e. an expected payoff of -2p. Thus, the sheriff will always shoot if p-1 > -2p, i.e. when p > 1/3.
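
The sheriff's calculation above can be reproduced in a few lines of code; the payoff numbers come from the tables, and the threshold follows from solving p - 1 = -2p:

```python
# A small check of the sheriff's expected payoffs, assuming the suspect plays
# the dominant strategy for each type (criminal shoots, civilian does not).
# Payoff numbers are taken from the tables above.

def sheriff_expected_payoff(action, p):
    """Sheriff's expected payoff; p is the prior probability of a criminal."""
    if action == "shoot":
        return p * 0 + (1 - p) * (-1)   # 0 against a criminal, -1 against a civilian
    return p * (-2) + (1 - p) * 0       # -2 against a criminal, 0 against a civilian

# Shooting is optimal exactly when p - 1 > -2p, i.e. when p > 1/3.
for p in (0.2, 1/3, 0.5):
    shoot = sheriff_expected_payoff("shoot", p)
    hold = sheriff_expected_payoff("not", p)
    print(f"p = {p:.3f}: shoot = {shoot:.3f}, not = {hold:.3f}, shoot better: {shoot > hold}")
```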


References

  1. ^ Harsanyi, John C. (1967–1968). "Games with Incomplete Information Played by "Bayesian" Players, I–III". Management Science. 14 (3): 159–183 (Part I); 14 (5): 320–334 (Part II); 14 (7): 486–502 (Part III).
  2. ^ Harsanyi, John C. (1968). "Games with Incomplete Information Played by "Bayesian" Players, I-III. Part II. Bayesian Equilibrium Points". Management Science. 14 (5): 320–334. ISSN 0025-1909.
  3. ^ Harsanyi, John C. (1968). "Games with Incomplete Information Played by "Bayesian" Players, I-III. Part III. The Basic Probability Distribution of the Game". Management Science. 14 (7): 486–502. ISSN 0025-1909.
  4. ^ a b Kajii, A.; Morris, S. (1997). "The Robustness of Equilibria to Incomplete Information". Econometrica. 65 (6): 1283–1309. doi:10.2307/2171737.
  5. ^ Levin, Jonathan (February 2002). "Games of Incomplete Information" (PDF). Stanford Graduate School of Business. Stanford University. Retrieved March 26, 2022. "Note that payoffs can depend not only on your own type, but on your rivals' types. If $u_i$ depends on $t_i$, but not on $t_{-i}$, we sometimes say the game has private values."
  6. ^ Harsanyi, John C. (2004). "Games with Incomplete Information Played by "Bayesian" Players, I-III: Part I. The Basic Model". Management Science. 50 (12): 1804–1817. ISSN 0025-1909.
  7. ^ Maschler, Michael; Solan, Eilon; Zamir, Shmuel (2013). Game Theory. Cambridge: Cambridge University Press. doi:10.1017/cbo9780511794216. ISBN 978-0-511-79421-6.
  8. ^ Shoham, Yoav; Leyton-Brown, Kevin (2008). Multiagent Systems. Cambridge: Cambridge University Press. ISBN 978-0-511-81165-4.
  9. ^ Ponssard, J. -P.; Sorin, S. (June 1980). "The LP formulation of finite zero-sum games with incomplete information". International Journal of Game Theory. 9 (2): 99–105. doi:10.1007/bf01769767. ISSN 0020-7276.
  10. ^ Huang, Linan; Zhu, Quanyan (2020-02-01). "A dynamic games approach to proactive defense strategies against Advanced Persistent Threats in cyber-physical systems". Computers & Security. 89: 101660. arXiv:1906.09687. doi:10.1016/j.cose.2019.101660. ISSN 0167-4048.
  11. ^ Albrecht, Stefano; Crandall, Jacob; Ramamoorthy, Subramanian (2016). "Belief and Truth in Hypothesised Behaviours". Artificial Intelligence. 235: 63–94. arXiv:1507.07688. doi:10.1016/j.artint.2016.02.004.
  12. ^ Bacharach, M. (1999). "Interactive team reasoning: A contribution to the theory of cooperation". Research in Economics. 53: 117–47. doi:10.1006/reec.1999.0188.
  13. ^ Newton, J. (2019). "Agency equilibrium". Games. 10 (1). doi:10.3390/g10010014.
  14. ^ "Coursera". Coursera. Retrieved 2016-06-16.
  15. ^ Hu, Yuhuang; Loo, Chu Kiong (2014-03-17). "A Generalized Quantum-Inspired Decision Making Model for Intelligent Agent". The Scientific World Journal. 2014. doi:10.1155/2014/240983. ISSN 1537-744X. PMC 3977121. PMID 24778580.
