In economics and game theory, complete information is an economic situation or game in which knowledge about other market participants or players is available to all participants. The utility functions (including risk aversion), payoffs, strategies and "types" of players are thus common knowledge. Under complete information, each player knows the sequence of play, the strategies available, and the payoffs throughout the game, and can therefore plan ahead to choose the strategy that maximizes their utility at the end of the game.
Conversely, in a game with incomplete information, players do not possess full information about their opponents. Some players possess private information, a fact that the others must take into account when forming expectations about how those players will behave. A typical example is an auction: each player knows their own utility function (their valuation for the item), but does not know the utility functions of the other players.[1]
Games of incomplete information arise frequently in social science. For instance, John Harsanyi was motivated by consideration of arms control negotiations, where the players may be uncertain both of the capabilities of their opponents and of their desires and beliefs.
It is often assumed that the players have some statistical information about the other players, e.g. in an auction, each player knows that the valuations of the other players are drawn from some probability distribution. In this case, the game is called a Bayesian game.
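The auction setting can be made concrete with a small numerical sketch. The specifics below are illustrative assumptions, not from the article: a two-bidder first-price sealed-bid auction in which each valuation is drawn independently from a uniform distribution on [0, 1]. In the symmetric Bayesian Nash equilibrium of this game, each bidder bids half their valuation; the code checks numerically that bidding half one's valuation is a best response when the opponent does the same.

```python
# Illustrative assumption (not from the article): two bidders, valuations
# i.i.d. Uniform(0, 1), first-price sealed-bid auction. The opponent is
# assumed to follow the equilibrium strategy of bidding half their valuation.

def expected_payoff(bid, valuation):
    """Expected payoff against an opponent who bids u/2, u ~ Uniform(0, 1).

    We win when bid > u/2, i.e. when u < 2*bid, which happens with
    probability min(2*bid, 1); ties occur with probability zero.
    """
    win_prob = min(2 * bid, 1.0)
    return win_prob * (valuation - bid)

valuation = 0.8
# Grid-search the bid that maximizes expected payoff for this valuation.
grid = [i / 1000 for i in range(0, 801)]
best_bid = max(grid, key=lambda b: expected_payoff(b, valuation))

print(best_bid)   # equals valuation / 2 = 0.4, confirming the equilibrium bid
```

The point of the sketch is that a bidder only needs the distribution of the opponent's valuation, not its realized value, to compute a best response; this is exactly the statistical information that defines a Bayesian game.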
The appropriate solution method depends on whether information is complete and whether the game is static or dynamic. In static games with complete information, Nash equilibrium is the standard solution concept for identifying viable strategies. In dynamic games with complete information, the solution concept is backward induction, which eliminates non-credible threats as potential strategies for players.
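Backward induction can be illustrated with a short sketch. The game below is an illustrative assumption, not from the article: an entry game in which an Entrant moves first and an Incumbent responds. The Incumbent's threat to Fight is non-credible (Accommodate pays better once entry has occurred), and backward induction prunes it.

```python
# A node is either a terminal payoff tuple (entrant, incumbent) or a
# (player_index, {action: subtree}) decision node. Payoffs are illustrative.
GAME = (0, {                       # player 0: Entrant
    "Stay out": (0, 2),
    "Enter": (1, {                 # player 1: Incumbent
        "Fight": (-1, -1),
        "Accommodate": (1, 1),
    }),
})

def backward_induction(node):
    """Return (payoffs, path of actions) selected by backward induction."""
    if isinstance(node, tuple) and not isinstance(node[1], dict):
        return node, []            # terminal node: payoffs, empty path
    player, actions = node
    best = None
    for action, subtree in actions.items():
        payoffs, path = backward_induction(subtree)
        # The player moving at this node keeps the action maximizing
        # their own payoff, given optimal play in every subgame below.
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + path)
    return best

payoffs, path = backward_induction(GAME)
print(path)      # ['Enter', 'Accommodate'] -- the non-credible Fight is pruned
print(payoffs)   # (1, 1)
```

Because the Incumbent would never actually choose Fight at its decision node, the Entrant can safely enter; this is precisely how backward induction eliminates non-credible threats.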
A classic example of a dynamic game with complete information is Stackelberg's (1934) sequential-move version of Cournot duopoly. Other examples include Leontief's (1946) monopoly-union model and Rubinstein's bargaining model.[2]
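The Stackelberg duopoly can be sketched numerically. The demand and cost figures are illustrative assumptions, not from the article: linear inverse demand P = a - Q with a = 12 and zero marginal cost. The follower's best response comes from its first-order condition, q2 = (a - c - q1)/2, and the leader then chooses q1 by backward induction.

```python
# Illustrative parameters (assumptions): inverse demand P = a - Q,
# constant marginal cost c, with a = 12 and c = 0.

def follower_best_response(q1, a=12.0, c=0.0):
    # The follower maximizes q2 * (a - q1 - q2 - c); the first-order
    # condition yields q2 = (a - c - q1) / 2 (non-negative quantities).
    return max((a - c - q1) / 2.0, 0.0)

def leader_profit(q1, a=12.0, c=0.0):
    # The leader anticipates the follower's best response (backward induction).
    q2 = follower_best_response(q1, a, c)
    return q1 * (a - q1 - q2 - c)

# Grid-search the leader's profit-maximizing quantity.
grid = [i / 100 for i in range(0, 1201)]
q1_star = max(grid, key=leader_profit)
q2_star = follower_best_response(q1_star)

print(q1_star, q2_star)          # 6.0 3.0, matching (a-c)/2 and (a-c)/4
print(leader_profit(q1_star))    # 18.0
```

The numerical result matches the textbook closed form q1* = (a - c)/2 and q2* = (a - c)/4: the leader produces twice the follower's quantity and earns twice the follower's profit.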
Lastly, when complete information is unavailable (incomplete information games), the relevant solution concept becomes Bayesian Nash equilibrium, since games with incomplete information are modeled as Bayesian games. In a game of complete information, the players' payoff functions are common knowledge, whereas in a game of incomplete information at least one player is uncertain about another player's payoff function.
The extensive form can be used to visualize the concept of complete information. By definition, players know where they are in the game, as depicted by the nodes, and the final outcomes, as illustrated by the utility payoffs. The players also understand the potential strategies of every player, and consequently their own best course of action to maximize their payoffs.
Complete information is importantly different from perfect information.
In a game of complete information, the structure of the game and the payoff functions of the players are commonly known but players may not see all of the moves made by other players (for instance, the initial placement of ships in Battleship); there may also be a chance element (as in most card games). Conversely, in games of perfect information, every player observes other players' moves, but may lack some information on others' payoffs, or on the structure of the game.[3] A game with complete information may or may not have perfect information, and vice versa.