Game Theory has emerged in recent decades as a powerful challenger to the conventional method of examining economics. Although many illustrious predecessors worked on problems that can be called "game theory", the fundamental, formal conception of game theory as part and parcel of economic theory was first organized in John von Neumann and Oskar Morgenstern's 1944 classic, Theory of Games and Economic Behavior.
The main purpose of game theory is to consider situations where, instead of agents making decisions as reactions to exogenous prices ("dead variables"), their decisions are strategic reactions to other agents' actions ("live variables"). An agent is faced with a set of moves he can play and will form a strategy, a best response to his environment, which he will play by. Strategies can be either "pure" (i.e. play a particular move) or "mixed" (randomize over moves). A "Nash Equilibrium" is reached when each agent's action begets a reaction by all the other agents which, in turn, begets the same initial action. In other words, the best responses of all players are consistent with each other.
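The mutual-best-response condition can be checked mechanically. Below is a minimal sketch for pure strategies in a two-player game; the payoff matrices are the standard Prisoner's Dilemma, used purely for illustration:

```python
from itertools import product

def pure_nash_equilibria(payoffs_a, payoffs_b):
    """Return all pure-strategy Nash equilibria of a two-player game.

    payoffs_a[i][j], payoffs_b[i][j]: payoffs to players A and B when
    A plays row i and B plays column j.
    """
    rows = range(len(payoffs_a))
    cols = range(len(payoffs_a[0]))
    equilibria = []
    for i, j in product(rows, cols):
        # (i, j) is a Nash equilibrium iff each move is a best response
        # to the other: no unilateral deviation raises either payoff.
        a_best = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in rows)
        b_best = all(payoffs_b[i][j] >= payoffs_b[i][l] for l in cols)
        if a_best and b_best:
            equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
print(pure_nash_equilibria(A, B))  # mutual defection (1, 1) is the unique NE
```

Note that the brute-force search only finds pure-strategy equilibria; a game like Matching Pennies has none, which is precisely why mixed strategies are needed.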
Game Theory can be roughly divided into two broad areas: noncooperative (or strategic) games and cooperative (or coalitional) games. The meaning of these terms is self-evident, although John Nash claimed that one should be able to reduce all cooperative games to some noncooperative form. This position is what is known as the "Nash Programme". Within the noncooperative literature, a distinction can also be made between "normal" form games (static) and "extensive" form games (dynamic).
John von Neumann and Oskar Morgenstern (1944) introduced the strategic normal game, the strategic extensive game, the concept of pure/mixed strategies and coalitional games, as well as the axiomatization of expected utility theory, which proved so useful for economics under uncertainty. They employed the "maximin" solution concept, derived earlier by John von Neumann (1928), to solve simple strategic, zero-sum normal games.
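The maximin rule is easy to state: the row player picks the row whose worst-case payoff is largest. A minimal sketch in pure strategies follows (von Neumann's 1928 theorem proper concerns mixed strategies; the matrix below is a hypothetical zero-sum game that happens to have a saddle point, so the two values coincide):

```python
def maximin(matrix):
    """Pure-strategy maximin value for the row player of a zero-sum game.

    matrix[i][j] is the row player's payoff (the column player gets
    -matrix[i][j]). Illustrative sketch only.
    """
    return max(min(row) for row in matrix)

def minimax(matrix):
    """Pure-strategy minimax value for the column player."""
    return min(max(matrix[i][j] for i in range(len(matrix)))
               for j in range(len(matrix[0])))

# A zero-sum game with a saddle point at (row 0, column 1):
M = [[4, 2, 5],
     [1, 0, 3]]
print(maximin(M), minimax(M))  # both equal 2
```

When maximin and minimax differ in pure strategies, von Neumann's theorem guarantees they can still be equated by allowing randomized (mixed) play.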
In 1950, John Nash introduced the concept of a "Nash Equilibrium" (NE), which became the organizing concept of Game Theory, even though the idea actually stretches as far back as Cournot (1838). Nash also introduced the concept of a "Nash Bargaining Solution" (NBS) for two-person bargaining problems (1950).
Then the floodgates opened for the refinement of Nash Equilibrium. In the field of noncooperative games, R. Duncan Luce and Howard Raiffa (1957) provided the first popular textbook on game theory and, in it, formalized the idea of the Iterated Elimination of Dominated Strategies (IEDS) method for strategic normal games and introduced the concept of a "Repeated Game" (a static game which is played several times over). H.W. Kuhn (1953) introduced extensive games with "imperfect information" (i.e. where one does not know what moves have already been played by other players). William Vickrey (1961) provided the first formalization of "auctions". Reinhard Selten (1965) developed the concept of a "Subgame Perfect Equilibrium" (SPE) (i.e. elimination by backward induction) as a refined solution for extensive form games. John C. Harsanyi (1967-8) developed the concept of a "Bayesian Nash Equilibrium" (BNE) for Bayesian games (i.e. games with incomplete information, where there is some uncertainty surrounding moves, or where "nature" plays as well).
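The IEDS method that Luce and Raiffa formalized can be sketched directly: repeatedly delete any pure strategy that is strictly dominated by another, until nothing more can be removed. A brute-force sketch (checking pure-strategy domination only; the Prisoner's Dilemma matrices are again just an illustration):

```python
def iterated_elimination(payoffs_a, payoffs_b):
    """Iteratively delete strictly dominated pure strategies (IEDS sketch).

    Returns the surviving row and column strategy indices. Only domination
    by other pure strategies is checked.
    """
    rows = list(range(len(payoffs_a)))
    cols = list(range(len(payoffs_a[0])))
    changed = True
    while changed:
        changed = False
        for i in rows[:]:
            # Row i is strictly dominated if some row k beats it in
            # every surviving column.
            if any(all(payoffs_a[k][j] > payoffs_a[i][j] for j in cols)
                   for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in cols[:]:
            # Symmetric check for the column player's strategies.
            if any(all(payoffs_b[i][l] > payoffs_b[i][j] for i in rows)
                   for l in cols if l != j):
                cols.remove(j)
                changed = True
    return rows, cols

# Prisoner's Dilemma: cooperation (strategy 0) is strictly dominated
# for both players, so only mutual defection survives.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
print(iterated_elimination(A, B))  # ([1], [1])
```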
In coalitional (cooperative) games, further refinements also occurred. Lloyd Shapley (1953) introduced the concept of the "Shapley Value" and the "Core" (which had been originally conceived by F.Y. Edgeworth (1881)) as solutions to coalitional games. Throughout the early 1960s, Robert J. Aumann and Martin Shubik began to apply cooperative game theory extensively throughout economics (e.g. industrial organization, general equilibrium, monetary theory, etc.) and, in the process, went on to invent several solution concepts for coalitional games (e.g. Bargaining Set, Strong Equilibrium), "large games" with infinitely many players, and early statements of the "Folk Theorems" (solution concepts for repeated games). David Schmeidler (1969) developed the "Nucleolus" solution for coalitional games.
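The Shapley Value has a compact interpretation: each player receives his average marginal contribution over all orders in which the coalition might assemble. A brute-force sketch, using a hypothetical three-player "gloves" game as the illustration:

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value of a coalitional game by direct enumeration.

    v maps a frozenset coalition to its worth. Each player is credited
    with his marginal contribution in every arrival order, then averaged.
    Brute-force sketch: fine for a handful of players only.
    """
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            values[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in values.items()}

# "Gloves" game: player 1 owns a left glove, players 2 and 3 each own
# a right glove; a matched pair is worth 1.
def worth(S):
    return 1.0 if 1 in S and (2 in S or 3 in S) else 0.0

print(shapley_values([1, 2, 3], worth))
# player 1 gets 2/3; players 2 and 3 get 1/6 each
```

The scarce left glove commands the lion's share of the surplus, illustrating how the value rewards pivotal players.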
Further developments emerged in the 1970s. John C. Harsanyi (1973) provided a remarkably insightful new interpretation of the concept of a "mixed strategy". Robert J. Aumann (1974) defined "Correlated Equilibrium", while Reinhard Selten (1975) introduced "Trembling Hand Perfect Equilibrium" as a further refinement. Further definitions came: Robert J. Aumann (1976) formally defined the concept of "Common Knowledge", opening a floodgate of literature, while B.D. Bernheim and D.G. Pearce (1984) formalized the concept of "rationalizability".
Advancements continued apace: David M. Kreps and Robert Wilson (1982) introduced the concept of "Sequential Equilibrium" (SEQE) for extensive games with imperfect information. Ariel Rubinstein (1982), following an early insight by Frederik Zeuthen (1930), transformed the cooperative Nash Bargaining Solution (NBS) into a noncooperative strategic extensive game of sequential bargaining. Elon Kohlberg and Jean-François Mertens (1986) developed the concept of "Forward Induction" for extensive games. Drew Fudenberg and E.S. Maskin (1986) developed one of the more famous "Perfect Folk Theorems" for infinitely repeated games. Finally, J.C. Harsanyi and R. Selten (1988) developed the idea of "equilibrium selection" for any type of game, while D. Fudenberg and Jean Tirole (1991) developed the "Perfect Bayesian Equilibrium" (PBE) for extensive Bayesian games.
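Rubinstein's sequential bargaining model yields a famously sharp closed-form answer: with alternating offers and discount factors for the two players, the unique subgame perfect equilibrium splits the pie immediately. A small sketch of that closed form (the discount factors below are purely illustrative):

```python
def rubinstein_split(delta1, delta2):
    """Equilibrium shares in Rubinstein's (1982) alternating-offers game.

    Player 1 proposes first; delta1, delta2 are the players' per-period
    discount factors (0 < delta < 1). In the unique subgame perfect
    equilibrium player 1 receives (1 - delta2) / (1 - delta1 * delta2),
    and the offer is accepted in the first period.
    """
    x1 = (1 - delta2) / (1 - delta1 * delta2)
    return x1, 1 - x1

# Equally patient players (delta = 0.9): the first proposer keeps
# slightly more than half, reflecting the first-mover advantage.
print(rubinstein_split(0.9, 0.9))
```

As both discount factors approach 1, the split converges to 50:50, recovering the symmetric Nash Bargaining Solution, which is the sense in which Rubinstein's noncooperative game underwrites the cooperative concept.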
Evolutionary game theory started its development slightly later. Its objective is to apply the concepts of noncooperative game theory to explain phenomena which are often thought to be the result of cooperation or human design, i.e. "institutions" and "conventions" such as market formation, price mechanisms, social rules of conduct, money and credit, etc. One of the earliest exponents of the theory of evolutionary games was Thomas C. Schelling (1960, 1981), who argued that apparently "cooperative" social institutions (in this case, settlements to conflicts) are maintained essentially by "threats" of punishment and retaliation. This insight has been followed up particularly in the 1990s.
Several Nobel Prizes have been awarded to some of the major figures of Game Theory: the Nobel was shared by John Nash, J.C. Harsanyi and R. Selten in 1994, and by William Vickrey and James Mirrlees in 1996. Herbert Simon won the Nobel in 1978 for concepts (e.g. bounded rationality) which have since been incorporated into the corpus of (Evolutionary) Game Theory.
Predecessors
Pioneers
The Modern Generation in Game Theory
Evolutionary Games


Resources on Game Theory

All rights reserved, Gonçalo L. Fonseca