Continuing on from last week’s post on the different strategies in game theory, I will talk about the most important equilibrium concept in game theory: the Nash Equilibrium. (I am still referencing Essentials of Game Theory.) I must point out that the Nash Equilibrium only describes stable strategy profiles in **non-cooperative** games. In cooperative games, the payoff for all players may be greater than that predicted by the Nash Equilibrium. This is the case because the Nash Equilibrium identifies strategy profiles in which each player’s strategy is a best response to the other players’ strategies, with no binding agreements between players. In other words, each player picks a strategy (pure or mixed) that provides the best possible outcome given what the other players are doing, and no player can improve by unilaterally deviating.

**Nash Equilibrium definition:**

To make what I said in words clearer and more mathematically rigorous, let me define some variables. As with my previous posts, let’s define $s_i$ to be player $i$’s strategy. Then let $s_{-i}$ be the strategy profile without $s_i$. In other words, $s = (s_i, s_{-i})$. A strategy profile $s = (s_1, \dots, s_n)$ is a Nash Equilibrium if for all players $i$, their chosen strategy, $s_i$, allows them to each gain a utility $u_i(s_i, s_{-i})$, where $u_i(s_i, s_{-i}) \geq u_i(s_i', s_{-i})$ for all $s_i'$. Put simply, a strategy profile is a Nash Equilibrium if every player chooses a strategy that puts them in the best position possible given a competitive game where every other player is doing the same. Note that a player’s strategy in a Nash Equilibrium may be either mixed or pure.

**Prisoner’s Dilemma:**
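For pure strategies, the definition above can be checked mechanically: hold everyone else fixed and test every unilateral deviation. Here is a minimal sketch in Python, using the Battle of the Sexes (the book’s example) with commonly used illustrative payoffs, since the actual matrix isn’t reproduced in this post:

```python
from itertools import product

# Illustrative Battle of the Sexes payoffs (an assumption, not from the
# book's figure): payoffs[(a1, a2)] = (u1, u2).
payoffs = {
    ("Ballet", "Ballet"):     (2, 1),
    ("Ballet", "Football"):   (0, 0),
    ("Football", "Ballet"):   (0, 0),
    ("Football", "Football"): (1, 2),
}
actions = ["Ballet", "Football"]

def is_pure_nash(a1, a2):
    """A profile is a Nash Equilibrium if neither player can gain
    by unilaterally deviating to another action."""
    u1, u2 = payoffs[(a1, a2)]
    # Player 1 deviates while player 2's action stays fixed.
    if any(payoffs[(d, a2)][0] > u1 for d in actions):
        return False
    # Player 2 deviates while player 1's action stays fixed.
    if any(payoffs[(a1, d)][1] > u2 for d in actions):
        return False
    return True

equilibria = [prof for prof in product(actions, actions) if is_pure_nash(*prof)]
print(equilibria)  # [('Ballet', 'Ballet'), ('Football', 'Football')]
```

Note that this brute-force check only finds pure equilibria; mixed equilibria, discussed below, require solving indifference conditions instead.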

Let’s look at a couple of examples of Nash Equilibria in games. The book provides a neat example of pure and mixed strategy Nash Equilibria in a game called the Battle of the Sexes. However, I’ll talk about another example called the Prisoner’s Dilemma to distinguish between the two kinds of equilibria more clearly. To see a description of the Prisoner’s Dilemma, look at its Wikipedia page, which provides a very good general overview. In the example of the Prisoner’s Dilemma below (Figure 1), the only pure Nash Equilibrium that exists is “Defect, Defect.” You may argue that the payoff is maximal for both players if they both cooperate. However, “Cooperate, Cooperate” is **not** a Nash Equilibrium because each player, given that the other player chooses to “cooperate,” can improve their payoff by switching their strategy to “defect.” Thus, given $s_i = \text{Cooperate}$ and $s_{-i} = \text{Cooperate}$, it is not true that $u_i(s_i, s_{-i}) \geq u_i(s_i', s_{-i})$ for all $s_i'$. This is because if player $i$ chooses to cooperate and the other player also chooses to cooperate, then player $i$ receives $u_i(\text{Cooperate}, \text{Cooperate})$. However, player $i$ could easily improve her position if she chooses to defect, since $u_i(\text{Defect}, \text{Cooperate}) > u_i(\text{Cooperate}, \text{Cooperate})$. The only Nash Equilibrium is “Defect, Defect,” because neither player may improve his or her position by deviating from it.

**Mixed Strategy Nash Equilibria:**
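The deviation argument above can be spelled out concretely. Figure 1 isn’t reproduced in this post, so the payoffs below are the standard illustrative numbers from the Wikipedia article (years lost in prison, as negative utilities), which is an assumption on my part:

```python
# Hypothetical Prisoner's Dilemma payoffs (an assumption; Figure 1 is
# not reproduced here). payoff[(row, col)] = (u_row, u_col).
payoff = {
    ("Cooperate", "Cooperate"): (-1, -1),
    ("Cooperate", "Defect"):    (-3,  0),
    ("Defect",    "Cooperate"): ( 0, -3),
    ("Defect",    "Defect"):    (-2, -2),
}

# "Cooperate, Cooperate" is not an equilibrium: holding the other
# player's Cooperate fixed, switching to Defect raises your payoff.
assert payoff[("Defect", "Cooperate")][0] > payoff[("Cooperate", "Cooperate")][0]

# "Defect, Defect" is an equilibrium: neither player gains by
# unilaterally switching back to Cooperate.
assert payoff[("Cooperate", "Defect")][0] <= payoff[("Defect", "Defect")][0]
assert payoff[("Defect", "Cooperate")][1] <= payoff[("Defect", "Defect")][1]
print("Only (Defect, Defect) survives unilateral deviations.")
```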

So far, we have only looked at an example of a pure strategy Nash Equilibrium. However, mixed strategy Nash Equilibria also exist in many games. In the Prisoner’s Dilemma, there is no other Nash Equilibrium besides the “Defect, Defect” strategy previously mentioned. Take my word for now, and I will offer a more rigorous explanation at the end of this section. For now, let’s consider another game that does have a mixed strategy Nash Equilibrium. The game we will now look at is called the Matching Pennies game (Figure 2). Computing a mixed strategy equilibrium is generally a very complicated task (which I will attempt to explain in my next blog post). However, in this case, we know the strategy profiles of both players, and we know that there are only two players in the game. Let’s suppose player 1 randomizes her actions by turning the penny to heads with probability $p$ and to tails with probability $1 - p$. Thus, given that player 1 turns the penny to either side randomly, player 2 must be indifferent between heads and tails given that he has no idea what player 1 will do. Given these two pieces of information, we know that to satisfy the Nash Equilibrium principle, the utility player 2 receives for either action must be the same. We, therefore, get the following equations and result:

$$u_2(\text{Heads}) = u_2(\text{Tails}) \;\Rightarrow\; -p + (1 - p) = p - (1 - p) \;\Rightarrow\; p = \tfrac{1}{2}.$$

If player 1 plays heads with 50% probability, player 2 will be indifferent between playing either heads or tails. This logic also applies in reverse, where player 2 plays heads with probability 50%. If both players play heads with 50% probability, then both players will be indifferent between playing either heads or tails. This constitutes a Nash Equilibrium because there is no other move that either player can make to improve their utility given the other player’s strategy.
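The indifference condition is easy to verify numerically. This sketch assumes the standard Matching Pennies payoffs (player 1 wins $+1$ on a match, player 2 wins $+1$ on a mismatch), since Figure 2 isn’t reproduced in this post:

```python
# Standard Matching Pennies payoffs (an assumption; Figure 2 is not
# reproduced here): player 1 wins +1 on a match, player 2 on a mismatch.
# If player 1 plays Heads with probability p, player 2's expected
# utilities for each pure action are:
def u2_heads(p):
    return -1 * p + 1 * (1 - p)   # a match on Heads hurts player 2

def u2_tails(p):
    return 1 * p + -1 * (1 - p)   # a mismatch helps player 2

# The indifference condition u2_heads(p) == u2_tails(p) reduces to
# (1 - 2p) == (2p - 1), i.e. p = 1/2:
p = 0.5
assert u2_heads(p) == u2_tails(p)
print(p, u2_heads(p))  # 0.5 0.0
```

Any other value of $p$ breaks the indifference, which is why the randomization must be exactly 50/50 in equilibrium.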

**Explanation of the lack of a mixed strategy equilibrium in the Prisoner’s Dilemma:**

Going back to the assumption I proposed in the beginning of this section, the Prisoner’s Dilemma has no other Nash Equilibrium besides “Defect, Defect.” In other words, there is no mixed strategy equilibrium in the Prisoner’s Dilemma*. Using the same technique described above, assume prisoner 2 chooses to cooperate with probability $p$ and to defect with probability $1 - p$. Then, for a mixed equilibrium to exist, prisoner 1 would have to be indifferent between cooperating and defecting. Thus, writing out the indifference condition, we obtain:

$$p \, u_1(C, C) + (1 - p) \, u_1(C, D) = p \, u_1(D, C) + (1 - p) \, u_1(D, D).$$

Solving for $p$ with the payoffs in Figure 1 gives $p > 1$. Wait a minute, a probability can never be greater than one. This is an invalid result, so we conclude that there is no mixed strategy that will make player 1 indifferent between cooperating and defecting. Thus, the only valid Nash Equilibrium in the Prisoner’s Dilemma is “Defect, Defect.” (For extra practice, think about the Prisoner’s Dilemma in the general case without any numbers.)
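To make the invalid result concrete, here is the same calculation with hypothetical payoffs. Since Figure 1 isn’t reproduced in this post, I chose numbers that still satisfy the Prisoner’s Dilemma ordering (temptation > reward > punishment > sucker) and, like the post’s figure, solve to a probability greater than one:

```python
from fractions import Fraction

# Hypothetical PD payoffs for player 1 (an assumption, not Figure 1's
# numbers): u1(C,C)=R, u1(C,D)=S, u1(D,C)=T, u1(D,D)=P, with T > R > P > S.
R, S, T, P = 2, -1, 3, 1

# If player 2 cooperates with probability p, player 1 is indifferent when
#   R*p + S*(1-p) == T*p + P*(1-p)
# Solving for p:
p = Fraction(P - S, (R - T) + (P - S))
assert R * p + S * (1 - p) == T * p + P * (1 - p)  # indifference holds...
print(p)  # 2 -- but 2 is not a valid probability, so no mixed equilibrium
```

With other common payoff choices the same equation instead yields $p < 0$ or no solution at all; either way, no valid probability makes a player indifferent, which is the point.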

*Please note: I hesitate to use the phrase “no mixed strategy equilibrium” because an equilibrium where one strategy is played with 100% probability is still technically a mixed strategy. What I mean to say is that there is no **properly mixed** strategy equilibrium, but I didn’t phrase it that way at the risk of confusion with pure strategy equilibria.

At the end of this post, I leave you with this important theorem:

**(Nash, 1951): Every game with a finite number of players and a finite number of action profiles has at least one Nash Equilibrium (possibly in mixed strategies).**

Perhaps sometime this week or sometime next week, I’ll attempt to explain the proof of this theorem, as well as how to find the mixed strategy equilibria of a game with more than two players (the general case). This might be the one post that breaks my promise of a simple and easy-to-understand applied math blog.