Dominance and Nash Equilibrium
Prof. John Patty
A Little Notation
Fix a game γ = (N, A, v).
For any player i ∈ N , let
A−i ≡ ⨉j≠i Aj
denote the set of all action profiles for all players other than i.
We need this because non-cooperative game theory assumes all players
make their choices “on their own.”
(This is “the big difference” between cooperative and non-cooperative game
theory)
So . . . A−i denotes all of the things that “i’s opponents might do”
Strictly Dominated Strategies
An action ai ∈ Ai is strictly dominated by bi ∈ Ai if
vi(bi; a−i) > vi(ai; a−i) for all a−i ∈ A−i.
Simply put, bi always yields player i a strictly higher payoff than ai.
When this is the case, there is no reason for player i to ever choose ai.
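This check can be carried out mechanically. The sketch below (the payoff-dictionary representation and function name are illustrative, not from the notes) tests whether one action strictly dominates another in a two-player game:

```python
# A two-player game, with player i's payoffs stored as a dictionary
# mapping action profiles (a1, a2) to numbers (representation assumed
# for illustration).

def strictly_dominates(v_i, player, b_i, a_i, opp_actions):
    """True if b_i strictly dominates a_i for `player` (0 = row, 1 = column)."""
    for a_opp in opp_actions:
        prof_b = (b_i, a_opp) if player == 0 else (a_opp, b_i)
        prof_a = (a_i, a_opp) if player == 0 else (a_opp, a_i)
        if v_i[prof_b] <= v_i[prof_a]:
            return False  # b_i must be strictly better against EVERY a_{-i}
    return True

# Row player's payoffs in a 2x2 game where B strictly dominates T:
v1 = {("T", "L"): 1, ("T", "R"): 0, ("B", "L"): 3, ("B", "R"): 2}
print(strictly_dominates(v1, 0, "B", "T", ["L", "R"]))  # True
```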
Strictly Dominant Strategies
An action ai ∈ Ai is strictly dominant if
vi(ai; a−i) > vi(bi; a−i) for all a−i ∈ A−i and all bi ∈ Ai with bi ≠ ai.
It is rare for a player to have a strictly dominant strategy in any but the
smallest of games.
To understand this, note that ai is strictly dominant for player i if and only
if every other strategy for i is strictly dominated.
Strictly Dominant Strategy Equilibrium
An action profile a∗ = (a∗i)i∈N is a strictly dominant strategy equilibrium if
a∗i is a strictly dominant action for each player i ∈ N .
Note: Such an equilibrium will exist only if every player i ∈ N has a strictly
dominant strategy.
Such games are often not very interesting (but some, like the Prisoner’s
Dilemma, are interesting).
Dominant Strategy Equilibrium Example: Prisoner’s Dilemma
     C      D
C  5, 5   0, 7
D  7, 0   2, 2
v1(C; C) = 5 < 7 = v1(D; C),
v1(C; D) = 0 < 2 = v1(D; D),
v2(C; C) = 5 < 7 = v2(D; C),
v2(C; D) = 0 < 2 = v2(D; D).
C is strictly dominated for both players.
This implies that D is strictly dominant for both players.
Thus, a∗ = (a∗1, a∗2) = (D, D) is the strictly dominant strategy equilibrium.
Weakly Dominated Strategies
An action ai ∈ Ai is weakly dominated by bi ∈ Ai if
vi(bi; a−i) ≥ vi(ai; a−i) for all a−i ∈ A−i, and
vi(bi; ã−i) > vi(ai; ã−i) for some ã−i ∈ A−i.
Simply put, bi never does worse, and sometimes does better, for i than ai
In this case, there’s no “strict” reason for player i to ever choose ai.
The argument for eliminating weakly dominated strategies is, ahem, weaker
than for strictly dominated ones.
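Weak dominance differs only in the inequalities: never worse anywhere, strictly better somewhere. A minimal sketch (representation and names are illustrative, not from the notes):

```python
def weakly_dominates(v_i, player, b_i, a_i, opp_actions):
    """True if b_i weakly dominates a_i: never worse, strictly better at least once."""
    strictly_better_somewhere = False
    for a_opp in opp_actions:
        prof_b = (b_i, a_opp) if player == 0 else (a_opp, b_i)
        prof_a = (a_i, a_opp) if player == 0 else (a_opp, a_i)
        if v_i[prof_b] < v_i[prof_a]:
            return False            # b_i must never do worse
        if v_i[prof_b] > v_i[prof_a]:
            strictly_better_somewhere = True
    return strictly_better_somewhere

# Row player's payoffs in the 2x2 game used later for IEWDS:
v1 = {("T", "L"): 1, ("T", "R"): 1, ("B", "L"): 1, ("B", "R"): 0}
print(weakly_dominates(v1, 0, "T", "B", ["L", "R"]))  # True: ties vs. L, better vs. R
print(weakly_dominates(v1, 0, "B", "T", ["L", "R"]))  # False
```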
Iterated Elimination of Strictly Dominated Strategies (IESDS)
IESDS works as follows.
Number the players 1, 2, . . . , n (it doesn’t matter how) and then
1. Eliminate all of player 1’s strictly dominated strategies (if any).
2. Label the remaining actions A₁¹ ⊆ A₁.
3. Create a new “smaller” game, γ₁¹ = (N, A₁¹ × A−1, v).
4. Proceed to Player 2, do the same thing for γ₁¹, creating a “smaller” γ₂¹ . . .
5. After Player n, proceed back to Player 1 and repeat . . .
6. When γₙᵗ = γ₁ᵗ (a full round eliminates nothing), you’re done.
7. For a finite game, this process will stop at a finite t.
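The steps above can be sketched for two-player games as follows (the payoff-dictionary representation and function names are my own; this is an illustration under those assumptions, not a definitive implementation):

```python
def iesds(actions1, actions2, v1, v2):
    """Iterated elimination of strictly dominated strategies for a
    two-player game. v1, v2 map profiles (a1, a2) to payoffs."""
    A1, A2 = list(actions1), list(actions2)
    changed = True
    while changed:            # keep cycling through the players...
        changed = False
        for player in (0, 1):
            own = A1 if player == 0 else A2
            opp = A2 if player == 0 else A1
            v = v1 if player == 0 else v2
            def payoff(a, o):
                return v[(a, o)] if player == 0 else v[(o, a)]
            dominated = [a for a in own
                         if any(all(payoff(b, o) > payoff(a, o) for o in opp)
                                for b in own if b != a)]
            if dominated:
                changed = True
                for a in dominated:
                    own.remove(a)
    return A1, A2             # ...until a full round eliminates nothing

# The example that follows in the notes: L is strictly dominated for P2,
# and then T becomes strictly dominated for P1.
v1 = {("T", "L"): 9, ("T", "R"): 1, ("B", "L"): 0, ("B", "R"): 2}
v2 = {("T", "L"): 1, ("T", "R"): 8, ("B", "L"): 0, ("B", "R"): 1}
print(iesds(["T", "B"], ["L", "R"], v1, v2))  # (['B'], ['R'])
```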
IESDS: An Example

     L      R
T  9, 1   1, 8
B  0, 0   2, 1
Player 1 (A1 = {T, B}):
9 = v1(T ∣ a2 = L) > v1(B ∣ a2 = L) = 0,
1 = v1(T ∣ a2 = R) < v1(B ∣ a2 = R) = 2,
so neither T nor B is strictly dominated for P1.
Player 2 (A2 = {L, R}):
1 = v2(L ∣ a1 = T) < v2(R ∣ a1 = T) = 8,
0 = v2(L ∣ a1 = B) < v2(R ∣ a1 = B) = 1,
⇒ L is strictly dominated by R for P2.
So, “eliminate a2 = L from the game” as follows:
IESDS: An Example, Continued

After realizing that P2 will not play a2 = L, P1 should evaluate the game as follows:

     R
T  1, 8
B  2, 1

Thus, if P1 believes that P2 is choosing a2 to maximize P2’s payoff, P1 should choose a1 = B, because a1 = T is strictly dominated once a2 = L is eliminated.
The unique strategy profile that survives IESDS in this game is
a∗ = (a∗1, a∗2) = (B, R).
Iterated Elimination of Weakly Dominated Strategies (IEWDS)
IEWDS is just like IESDS, eliminating weakly dominated strategies instead.
But the order of the players can matter. Consider the following:
     L      R
T  1, 1   1, 1
B  1, 1   0, 0
● If “we start elimination” with P1, then {(T, L), (T, R)} survive IEWDS,
● If “we start elimination” with P2, then {(T, L), (B, L)} survive IEWDS,
Thus, the set of profiles that survive IEWDS is not always uniquely defined.
In fact, IEWDS can “eliminate” some Nash equilibria.
This is not a desirable property.
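The order dependence can be seen computationally. The sketch below (names illustrative, not from the notes) finds each player’s weakly dominated actions in the game above; eliminating first for P1 removes B, while eliminating first for P2 removes R, producing the two different surviving sets:

```python
def weakly_dominated(own, opp, payoff):
    """Actions in `own` that are weakly dominated by some other action in `own`."""
    out = []
    for a in own:
        for b in own:
            if b == a:
                continue
            if (all(payoff(b, o) >= payoff(a, o) for o in opp)
                    and any(payoff(b, o) > payoff(a, o) for o in opp)):
                out.append(a)
                break
    return out

# The game above: T weakly dominates B for P1; L weakly dominates R for P2.
v1 = {("T", "L"): 1, ("T", "R"): 1, ("B", "L"): 1, ("B", "R"): 0}
v2 = {("T", "L"): 1, ("T", "R"): 1, ("B", "L"): 1, ("B", "R"): 0}

# Start with P1: eliminate B; survivors are then {(T, L), (T, R)}.
p1_pay = lambda a, o: v1[(a, o)]
print(weakly_dominated(["T", "B"], ["L", "R"], p1_pay))  # ['B']

# Start with P2: eliminate R; survivors are then {(T, L), (B, L)}.
p2_pay = lambda a, o: v2[(o, a)]
print(weakly_dominated(["L", "R"], ["T", "B"], p2_pay))  # ['R']
```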
Common knowledge

As mentioned earlier, we typically assume not only that players are rational, but that this rationality is common knowledge among the players.
An event E is common knowledge if
● Each player knows E,
● Each player knows that every other player knows E,
● Each player knows that every other player knows that each player
knows E,
● . . . and so forth.
This can get pretty complicated pretty quickly—and we’ll see why we gen-
erally “need” common knowledge later in the course.
Best responses
Many games do not have any dominated strategies for any players. Con-
sider the following two canonical 2x2 games:
● Pure Coordination Game
     L      R
L  1, 1   0, 0
R  0, 0   1, 1
The players want to choose the “same strategy”
(note: L and R are simply labels)
● Matching pennies
     H      T
H  1, −1  −1, 1
T  −1, 1  1, −1
P1 wants to choose what P2 chose, but
P2 wants to choose what P1 did not choose.
Rationalizability
What is the “most we can say” about what will happen if we know only the
players’ payoffs and that their rationality is common knowledge?
This is captured by the notion of rationalizability.
It has been shown that this is the same as IESDS.
This is not that useful from a practical standpoint, but it tells us a lot about
what it is reasonable to expect from more detailed solution concepts that
assume that “rationality” is common knowledge.
Namely, multiplicity is not ruled out by common knowledge of rationality.
(This is seen in the Pure Coordination Game above.)
Pause for Questions/Discussion.
The Set of Pure Strategies
Consider any static game of complete information, γ = (N, A, v).
For each player i ∈ N , the set of pure strategies for i is denoted by
Si ≡ Ai ,
and S ≡ ⨉i∈N Si is the set of strategy profiles.
A strategy is a complete description of “how to play the game.”
In static games of complete information, this is simply the choice of action.
Nash equilibria are defined in terms of strategies, rather than actions.
Nash Equilibrium in Pure Strategies
A strategy profile s∗ is a (pure strategy) Nash equilibrium if, for all i ∈ N ,
vi(s∗i , s∗−i) ≥ vi(s′i, s∗−i) for all s′i ∈ Si.
In words, a (pure strategy) Nash equilibrium involves all players choosing
optimally given the other players’ choices.
(Note: Tadelis refers to this as the definition of Nash equilibrium, but we
will soon see that pure strategy equilibria are only a subset of all Nash equilibria.)
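For finite two-player games, this definition can be checked profile by profile. A brute-force sketch (the payoff-dictionary representation and names are illustrative, not from the notes):

```python
from itertools import product

def pure_nash(A1, A2, v1, v2):
    """All pure strategy Nash equilibria of a two-player game,
    found by checking every profile against every unilateral deviation."""
    eq = []
    for s1, s2 in product(A1, A2):
        best1 = all(v1[(s1, s2)] >= v1[(t1, s2)] for t1 in A1)
        best2 = all(v2[(s1, s2)] >= v2[(s1, t2)] for t2 in A2)
        if best1 and best2:
            eq.append((s1, s2))
    return eq

# Prisoner's Dilemma: the unique pure strategy Nash equilibrium is (D, D).
v1 = {("C", "C"): 5, ("C", "D"): 0, ("D", "C"): 7, ("D", "D"): 2}
v2 = {("C", "C"): 5, ("C", "D"): 7, ("D", "C"): 0, ("D", "D"): 2}
print(pure_nash(["C", "D"], ["C", "D"], v1, v2))  # [('D', 'D')]
```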
Best Responses in Pure Strategies
Fix any profile of other players’ pure strategies, s−i.
Then the set of best responses for player i to s−i is denoted by
BRi(s−i) = argmax_{si ∈ Si} vi(si; s−i).
Example with Best Responses Marked:

     L      C      R
T  4, 1   1, 1   2, 3
M  2, 2   2, 3   2, 3
B  1, 1   0, 0   5, 5
Correspondences, Briefly
Take any two sets, X and Y .
A function f ∶ X → Y maps each x ∈ X into a single element, f (x) ∈ Y .
A correspondence g ∶ X ⇉ Y maps each x ∈ X into a subset, g(x) ⊆ Y .
Correspondences, Briefly
In the example above, BR1 is a function, and BR2 is a correspondence:
BR1(s2) =
  T        if s2 = L,
  M        if s2 = C,
  B        if s2 = R,

and

BR2(s1) =
  R        if s1 = T,
  {C, R}   if s1 = M,
  R        if s1 = B.
(Note: All functions are correspondences, but the converse does not hold.)
Nash Equilibrium and Best Responses
The best response correspondence for γ = (N, A, v) is defined as
BR ≡ ⨉i∈N BRi,
and note that BR maps S ≡ ⨉i∈N Si into itself:
BR ∶ S ⇉ S.
Thus, “s∗ is a Nash Equilibrium” ⇔ “s∗ is a fixed point of BR”:
s∗ ∈ BR(s∗).
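This fixed-point characterization can be illustrated on the 3x3 example above: build each player’s best response correspondence and keep exactly the profiles with s1 ∈ BR1(s2) and s2 ∈ BR2(s1) (a sketch; representation and names are illustrative):

```python
from itertools import product

def best_responses(own, opp, payoff):
    """BR_i as a correspondence: maps each opposing strategy to the set
    of payoff-maximizing own strategies."""
    br = {}
    for o in opp:
        m = max(payoff(a, o) for a in own)
        br[o] = {a for a in own if payoff(a, o) == m}
    return br

# The 3x3 example above (payoffs as given in the notes).
v1 = {("T", "L"): 4, ("T", "C"): 1, ("T", "R"): 2,
      ("M", "L"): 2, ("M", "C"): 2, ("M", "R"): 2,
      ("B", "L"): 1, ("B", "C"): 0, ("B", "R"): 5}
v2 = {("T", "L"): 1, ("T", "C"): 1, ("T", "R"): 3,
      ("M", "L"): 2, ("M", "C"): 3, ("M", "R"): 3,
      ("B", "L"): 1, ("B", "C"): 0, ("B", "R"): 5}

br1 = best_responses(["T", "M", "B"], ["L", "C", "R"], lambda a, o: v1[(a, o)])
br2 = best_responses(["L", "C", "R"], ["T", "M", "B"], lambda a, o: v2[(o, a)])

# s* is a Nash equilibrium iff it is a fixed point of BR:
fixed_points = [(s1, s2)
                for s1, s2 in product(["T", "M", "B"], ["L", "C", "R"])
                if s1 in br1[s2] and s2 in br2[s1]]
print(fixed_points)  # [('M', 'C'), ('B', 'R')]
```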
Picturing “Fixed Points”: Uniqueness & Multiplicity
[Figure: two plots of a function f on [0, 1] against the 45-degree line y = x. Left: a unique fixed point x∗ = f(x∗). Right: multiple fixed points x∗, x∗∗, and x∗∗∗.]

Both of these cases can easily occur.
(But multiplicity leaves more questions to be answered.)
Picturing “Fixed Points”: Tangency & Nonexistence
[Figure: two plots of a function f on [0, 1] against the 45-degree line y = x.]
Left: Tangency (knife-edge, not robust)
Right: Nonexistence (often robust & “not knife-edge”)
Nash Equilibrium Example: Stag Hunt
     S      H
S  5, 5   0, 4
H  4, 0   3, 3
Two pure strategy Nash equilibria: (S, S) and (H, H):
5 = v1(S ∣ a2 = S) ≥ v1(H ∣ a2 = S) = 4,
5 = v2(S ∣ a1 = S) ≥ v2(H ∣ a1 = S) = 4,
3 = v1(H ∣ a2 = H) ≥ v1(S ∣ a2 = H) = 0,
3 = v2(H ∣ a1 = H) ≥ v2(S ∣ a1 = H) = 0.
Stag Hunt Equilibria & Beliefs
Which of the two pure strategy Nash equilibria “should occur” depends on
the players’ beliefs:
     S      H
S  5, 5   0, 4
H  4, 0   3, 3
Each equilibrium is “self-enforcing,” but it is not clear which one should occur
(from the perspective of Nash Equilibrium).
But . . . one of the equilibria is clearly “better” (e.g., according to Pareto)
This leads to the question of equilibrium selection.
Nash Equilibrium: Equilibrium Selection (Not in Book)
Harsanyi & Selten (1988)
● Payoff-dominance (S, S in the stag hunt)
– The Nash equilibrium that Pareto dominates other equilibria
– In 2x2 games, there is one “generically”
● Risk-dominance (H, H in the stag hunt)
– The Nash equilibrium that has the largest “basin of attraction”
– In 2x2 games, this is the equilibrium that corresponds to each
player’s best response to a “50/50” random player
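In the Stag Hunt, the risk-dominant equilibrium can be found exactly as described: compute each action’s expected payoff against an opponent who mixes 50/50 (a sketch; representation and names are illustrative):

```python
# Stag Hunt payoffs for the row player (the game is symmetric).
v = {("S", "S"): 5, ("S", "H"): 0, ("H", "S"): 4, ("H", "H"): 3}

def expected_vs_5050(action):
    """Expected payoff of `action` against an opponent playing S and H
    with probability 1/2 each."""
    return 0.5 * v[(action, "S")] + 0.5 * v[(action, "H")]

print(expected_vs_5050("S"))  # 2.5
print(expected_vs_5050("H"))  # 3.5 -> H is the best response to 50/50,
                              #        so (H, H) is risk-dominant.
```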
Nash Equilibrium Example: Tragedy of the Commons
N = {1, 2, . . . , n}
Ai = [0, ∞)
Payoffs:
vi(ai, a−i) = ln(ai) + ln(κ − ∑ⁿⱼ₌₁ aj),
where κ > 0 is a parameter for the “capacity” of the commons.
Nash Equilibrium Example: Tragedy of the Commons
Finding the best response function for a player i ∈ N :
∂vi(ai, a−i)/∂ai = 1/ai − 1/(κ − ∑ⁿⱼ₌₁ aj) = 0,

implying

BRi(a−i) = (κ − ∑j≠i aj) / 2.

If n = 2, then

BR1(a2) = (κ − a2)/2 and BR2(a1) = (κ − a1)/2,

so that

a∗1 = BR1(BR2(a∗1)) = κ/3 = a∗2.
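One way to see the n = 2 result numerically is to iterate the best response functions from an arbitrary starting point; the process converges to a∗ = κ/3 (a sketch; the value of κ is chosen arbitrarily for illustration):

```python
kappa = 12.0  # capacity parameter (arbitrary illustrative value)

def br(a_other):
    """Best response in the two-player commons: (kappa - a_j) / 2."""
    return (kappa - a_other) / 2

# Iterate best responses from an arbitrary starting point; the sequence
# converges to the Nash equilibrium a* = kappa / 3 = 4.
a1 = 0.0
for _ in range(50):
    a1 = br(br(a1))
print(round(a1, 6))  # 4.0
```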
Nash Equilibrium Example: Tragedy of the Commons
For general n, the Nash equilibrium is
a∗i = (κ − ∑j≠i a∗j) / 2
    = (κ − (n − 1)a∗i) / 2   (by symmetry),

so

((n + 1)/2) a∗i = κ/2,
a∗i = κ / (n + 1).
Nash Equilibrium Example: Tragedy of the Commons
Benthamite social welfare optimum (denoted here by ao):
∂/∂ao [n (ln(ao) + ln(κ − nao))] = n/ao − n²/(κ − nao) = 0

implies

n/ao = n²/(κ − nao),
nκ − n²ao = n²ao,
ao = κ / (2n).
a∗i ≠ ao ⇒ the Nash equilibrium is inefficient.
a∗i > ao ⇒ the Nash equilibrium results in “overgrazing” of the commons.
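The two formulas can be compared directly: for every n ≥ 2, the per-player equilibrium action κ/(n + 1) exceeds the per-player optimum κ/(2n), since 2n > n + 1 whenever n > 1 (a sketch; the value of κ is arbitrary):

```python
kappa = 12.0  # capacity parameter (arbitrary illustrative value)

for n in [2, 5, 10, 100]:
    a_nash = kappa / (n + 1)   # per-player Nash equilibrium action
    a_opt = kappa / (2 * n)    # per-player social optimum
    assert a_nash > a_opt      # overgrazing for every n >= 2
    print(n, round(a_nash, 3), round(a_opt, 3))
```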
Nash Equilibrium Example: Tragedy of the Commons
[Figure: Blue: Individual Payoff, Yellow: Total Social Payoff]