Introducing Game
Theory and its
Applications
This classic text, originally from the noted logician Elliott Mendelson, is intended to be an easy-
to-read introduction to the basic ideas and techniques of game theory. It can be used as a class
textbook or for self-study.
Introducing Game Theory and its Applications, Second Edition presents an easy-to-read intro-
duction to the basic ideas and techniques of game theory. After a brief introduction, the authors
begin with a chapter devoted to combinatorial games—a topic neglected or treated minimally
in most other texts. The focus then shifts to two-person zero-sum games and their solutions.
Here the authors present the simplex method based on linear programming for solving these
games and develop within this presentation the required background. The final chapter pres-
ents some of the fundamental ideas and tools of non-zero-sum games and games with more
than two players, including an introduction to cooperative game theory.
The book is suitable for a first undergraduate course in game theory, or a graduate course for
students with limited previous exposure. It is useful for students who need to learn some game
theory for a related subject (e.g., microeconomics) and have a limited mathematical back-
ground. It also prepares its readers for more advanced study of game theory’s applications in
economics, business, and the physical, biological, and social sciences.
The authors hope this book sparks readers' curiosity about the subject and goes some way
toward satisfying it, preparing them for a deeper study of game theory's applications in many
fields of study.
Elliott Mendelson is the late professor emeritus at Queens College in Flushing, New York,
USA. Dr. Mendelson obtained his bachelor’s at Columbia University and his master’s and doc-
toral degrees at Cornell University and was elected afterward to the Harvard Society of Fellows.
In addition to his other writings, he is the author of another CRC Press book, Introduction to
Mathematical Logic, Sixth Edition.
Dan Zwillinger has more than 35 years of proven technical expertise in numerous areas of
engineering and the physical sciences. He earned a PhD in applied mathematics from the
California Institute of Technology. He is the editor of CRC Standard Mathematical Tables and
Formulas, 33rd Edition and also Table of Integrals, Series, and Products, by Gradshteyn and
Ryzhik. He serves as the series editor for the CRC Series of Advances in Applied Mathematics.
Advances in Applied Mathematics
Series Editor: Daniel Zwillinger
Observability and Mathematics
Fluid Mechanics, Solutions of Navier-Stokes Equations, and Modeling
Boris Khots
Handbook of Differential Equations, Fourth Edition
Daniel Zwillinger and Vladimir Dobrushkin
Experimental Statistics and Data Analysis for Mechanical and Aerospace Engineers
James Middleton
Advanced Engineering Mathematics with MATLAB®, Fifth Edition
Dean G. Duffy
Handbook of Fractional Calculus for Engineering and Science
Harendra Singh, H. M. Srivastava, Juan J. Nieto
Advanced Engineering Mathematics
A Second Course with MATLAB®
Dean G. Duffy
Quantum Computation
Helmut Bez and Tony Croft
Computational Mathematics
An Introduction to Numerical Analysis and Scientific Computing with Python
Dimitrios Mitsotakis
Delay Ordinary and Partial Differential Equations
Andrei D. Polyanin, Vsevolod G. Sorokin, and Alexei I. Zhurov
Clean Numerical Simulation
Shijun Liao
Multiplicative Partial Differential Equations
Svetlin Georgiev and Khaled Zennir
Engineering Statistics
A Matrix-Vector Approach with MATLAB®
Lester W. Schmerr Jr.
General Quantum Numerical Analysis
Svetlin Georgiev and Khaled Zennir
An Introduction to Partial Differential Equations with MATLAB®
Matthew P. Coleman and Vladislav Bukshtynov
Handbook of Exact Solutions to Mathematical Equations
Andrei D. Polyanin
Introducing Game Theory and its Applications, Second Edition
Elliott Mendelson and Daniel Zwillinger
Introducing Game
Theory and its
Applications
Second Edition
Elliott Mendelson and Daniel Zwillinger
MATLAB® and Simulink® are trademarks of The MathWorks, Inc. and are used with permission.
The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use
or discussion of MATLAB® or Simulink® software or related products does not constitute endorse-
ment or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the
MATLAB® and Simulink® software.
Second edition published 2025
by CRC Press
2385 NW Executive Center Drive, Suite 320, Boca Raton FL 33431
and by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
CRC Press is an imprint of Taylor & Francis Group, LLC
© 2025 Elliott Mendelson and Daniel Zwillinger
First edition published by Taylor and Francis 2004
Reasonable efforts have been made to publish reliable data and information, but the author and pub-
lisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced
in this publication and apologize to copyright holders if permission to publish in this form has not
been obtained. If any copyright material has not been acknowledged please write and let us know so
we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information stor-
age or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com
or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923,
978-750-8400. For works that are not available on CCC please contact mpkbookspermissions@tandf.co.uk.
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.
ISBN: 978-0-367-50791-6 (hbk)
ISBN: 978-1-032-81180-2 (pbk)
ISBN: 978-1-003-05127-5 (ebk)
DOI: 10.1201/9781003051275
Typeset in Latin Modern font
by KnowledgeWorks Global Ltd.
Publisher’s note: This book has been prepared from camera-ready copy provided by the authors.
Contents
Preface vii
Introduction ix
1 Combinatorial games 1
1.1 Definition of combinatorial games . . . . . . . . . . . . . . . 1
1.2 Fundamental theorem of combinatorial games . . . . . . . . 6
1.3 Nim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4 Hex and other games . . . . . . . . . . . . . . . . . . . . . . 17
1.5 Tree games . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6 Grundy functions . . . . . . . . . . . . . . . . . . . . . . . . 27
1.7 Bogus Nim-sums . . . . . . . . . . . . . . . . . . . . . . . . . 32
1.8 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . 40
2 Two-person zero-sum games 41
2.1 Games in normal form . . . . . . . . . . . . . . . . . . . . . 41
2.2 Saddle points and equilibrium pairs . . . . . . . . . . . . . . 43
2.3 Maximin and minimax . . . . . . . . . . . . . . . . . . . . . 47
2.4 Mixed strategies . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.5 2 × 2 matrix games . . . . . . . . . . . . . . . . . . . . . . . 64
2.6 2 × n, m × 2, and 3 × 3 matrix games . . . . . . . . . . . . . 68
2.7 Linear programming . . . . . . . . . . . . . . . . . . . . . . . 78
2.8 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . 86
3 Solving two-person zero-sum games using LP 87
3.1 Perfect canonical linear programming problems . . . . . . . . 87
3.2 The simplex method . . . . . . . . . . . . . . . . . . . . . . . 90
3.3 Pivoting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.4 The perfect phase of the simplex method . . . . . . . . . . . 95
3.5 The Big M method . . . . . . . . . . . . . . . . . . . . . . . 98
3.6 Bland’s rules to prevent cycling . . . . . . . . . . . . . . . . 102
3.7 Duality and the simplex method . . . . . . . . . . . . . . . . 107
3.8 Solution of game matrices . . . . . . . . . . . . . . . . . . . . 110
3.9 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . 116
4 Non-zero-sum games and k-person games 117
4.1 The general setting . . . . . . . . . . . . . . . . . . . . . . . 117
4.2 Nash equilibria . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.3 Graphical method for 2 × 2 matrix games . . . . . . . . . . . 124
4.4 Inadequacies of Nash equilibria and cooperative games . . . 137
4.5 The Nash arbitration procedure . . . . . . . . . . . . . . . . 143
4.6 Games with two or more players . . . . . . . . . . . . . . . . 149
4.7 Coalitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4.8 Games in coalition form . . . . . . . . . . . . . . . . . . . . . 155
4.9 The Shapley value . . . . . . . . . . . . . . . . . . . . . . . . 158
4.10 The Banzhaf power index . . . . . . . . . . . . . . . . . . . . 162
4.11 Imputations . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
4.12 Strategic equivalence . . . . . . . . . . . . . . . . . . . . . . 166
4.13 Stable sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
4.14 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . 169
5 Imperfect information games 170
5.1 The general setting . . . . . . . . . . . . . . . . . . . . . . . 170
5.2 Complete information games in extensive form . . . . . . . 170
5.3 Imperfect information games in extensive form . . . . . . . 173
5.4 Games with random effects . . . . . . . . . . . . . . . . . . . 174
5.5 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . 181
6 Computer solutions to games 182
6.1 Zero-sum games — invertible matrices . . . . . . . . . . . . . 182
6.2 Zero-sum games — linear program problem (LP) . . . . . . . 187
6.3 Linear programming capability — Python with PuLP . . . . 193
6.4 Non-zero sum games — linear complementarity problem (LCP) 194
6.5 Special game packages . . . . . . . . . . . . . . . . . . . . . 197
6.6 Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . 204
Appendices 205
Appendix A Utility theory 205
Appendix B Nash’s theorem 208
Appendix C Finite probability theory 210
Appendix D Calculus and differentiation 220
Appendix E Linear algebra 222
Appendix F Linear programming 228
Appendix G Named games and game data 236
Answers to selected exercises 245
Bibliography 263
Index 269
Preface
This book is intended to be an easy-to-read introduction to the basic ideas and
techniques of game theory. It can be used as a class textbook or for self-study.
It is useful for students who need to learn some game theory for a related
subject (e.g., microeconomics) and have a limited mathematical background.
In recent years, and especially since the 1994 Nobel Prize in Economics
was awarded to John C. Harsanyi, John F. Nash Jr., and Reinhard Selten for
their game theory research, people have been intrigued by games. The authors
hope that this curiosity may be satisfied to some extent by reading this book.
The book will prepare them for a deeper study of game theory applications
in economics and business and in the physical, biological, and social sciences.
The first part of the text (Chapter 1) is devoted to combinatorial games.
These games tend to be more recreational in nature and include board games
like chess and checkers (and even a simple children’s game like Tic-Tac-Toe).
There are also many games that are challenging even for accomplished math-
ematicians, and our study covers various techniques for successful play. The
rest of the book deals with the general theory of games.
Chapters 2 and 3 contain a thorough treatment of two-person zero-sum
games and their solutions, which is the most well-understood part of game
theory and was developed from the 1920s through the 1950s. John von Neumann was
responsible, almost by himself, for inventing the subject.
Chapter 4 introduces the reader to games that are not zero-sum and/or
involve more than two players. Here we consider cooperative games where
players are not constrained to act as isolated individuals but can form coali-
tions with other players. Here we have only presented the basic ideas so that
readers can venture further on their own. This chapter includes some of the
applications that make game theory so interesting, for example, in economics,
in the theory of political power, and in evolutionary biology. We assume no
previous acquaintance with the details of these subjects. Our treatment is in-
tended to provide enough of the fundamental concepts and techniques of game
theory to make it easier to understand more advanced applications.
Chapter 5 has a short discussion of imperfect information games. These are
games in which not all of the information during game play is known to the players. For
example, when playing card games, such as poker, a player typically knows their own cards
but not what cards the opponents have. This is clearly different from, say, chess, where the
positions of the pieces give complete information to all. Many
real-world problems addressable by game theory are imperfect information
games. The general approach for solving these games is described and illus-
trated.
Chapter 6 illustrates how games can be solved using computers. A new
game solution technique is also introduced, which is only practical on a com-
puter. Games solved earlier in the book (zero-sum, non-zero-sum, and im-
perfect information games) are re-solved using computer programs in Julia,
Mathematica, Octave, Python, and R. Several special purpose game packages
are also illustrated.
The book includes several short appendices: two contain key material that is
not required for a first overview of game theory, four contain review material
that may be useful to some readers, and one lists several common and popular
games, along with game data that may be used for further investigation.
Appendix A sketches an axiomatic treatment of utility theory. Although
the utility concept is necessary for a proper understanding of game theory
(and economics in general), it is a distraction for readers first learning game
theory. Appendix B contains two proofs of Nash’s theorem on the existence of
Nash equilibria. They are in an appendix since they depend on results from
topology, with which many readers will not be familiar.
Appendix C reviews finite probability theory and provides what is nec-
essary to understand the probability applications in the text. Appendix D
includes a review of the differentiation used in one of the game solution techniques. Ap-
pendix E has a linear algebra review. It introduces vector notation, needed
both for computer implementations and to compactly write linear programs,
which are discussed in Appendix F. Appendix G contains names of many
games that often appear in the literature and also data for coalition games.
Answers to Selected Exercises contains brief solutions to many of the ex-
ercises, enough for readers to be sure that they understand what they have
learned.
If you find any errors in this book, please send them to ZwillingerBooks@gmail.com.
If any errata are found, they will appear at https://www.mathtable.com/gt.
Elliott Mendelson created a fantastic first edition of this book. It has been
a pleasure to expand upon the strong foundation he created. Sadly, Elliott is
not around to see this expanded second edition.
Elliott Mendelson
2020
Dan Zwillinger
Newton, Massachusetts
January 2024
Introduction
Game theory is the mathematical study of games. By a game we mean not
only recreational games like chess or poker. We also have in mind more se-
rious “games,” such as a contract negotiation between a labor union and a
corporation, or an election campaign. Thus, if “game” were not already the
established term, “competition” might be more appropriate.
Apart from satisfying our curiosity about clever ways of playing recre-
ational games, game theory has also become the focus of intense interest in
studies of business and economics, where the emphasis is on decision-making
in a competitive environment. In fact, the language of game theory is becom-
ing more and more a part of mainstream economics. In this book, we will not
assume any previous knowledge of economics.
In recent years, game theory has also entered into diverse fields such as bi-
ology (evolutionary stability, RNA phage dynamics, viral latency, chromosome
segregation subversion in sexual species, Escherichia coli mutant proliferation
under environmental stress, and population dynamics) and political science
(for example, power distribution in legislative bodies). Additionally, games
are playing a large role in the farthest reaches of axiomatic set theory in
studies of large cardinals and projective determinacy, but we shall not pursue
that here.
Now let us become acquainted with a few of the ideas of game theory.
Every game has two or more players. The rules of the game specify how
the game is to start, namely, which player or players must start the game
and what the situation is at the start. That situation is called the initial
position. Other positions may also occur in the game. At each position, the
rules indicate which player (or players) makes a move from that position and
what are the allowable moves from that position to other positions. For each
game position P , there must be at least one sequence of moves from the initial
position to P . (Otherwise, that position could never enter into the play of the
game and would be superfluous.) Some positions are designated as terminal
positions; no moves are allowed from such a position so the game ends when
such a position is reached. A play of the game consists of a sequence of moves
starting at the initial position and ending at a terminal position. If the rules
of the game are such that an infinite sequence of moves is impossible, then the
game is said to be finite. Chess and checkers, for example, are finite because
the rules state that a play of the game must end when the same position has
recurred a certain number of times.
Every terminal position determines a payoff to each of the players.1
In many games, these payoffs are numbers. Usually these numbers are the
amounts of money that the various players win or lose at that terminal po-
sition (positive numbers indicating winnings and negative numbers losses). If
the sum of the payoffs at each terminal position is zero, then the game is called
a zero-sum game. Most recreational and gambling games can be considered
zero-sum; whatever some players win has to be balanced by the losses of the
other players. On the other hand, many games in economics and politics are
not zero-sum. For example, a collective bargaining agreement might result in
gains or losses for both sides.
Sometimes, a numerical payoff may indicate simply whether the player has
won or lost. For example, 1 may indicate a win and −1 a loss. In games in
which draws (ties) can occur, a draw may be designated by a 0. Alternatively,
a win could be shown by W, a loss by L, and a draw by D.
Very simple games can be pictured by a diagram, more precisely by a di-
rected graph. The various positions are represented by nodes (that is, vertices
or points) of the graph. Each node, shown as a small open circle, is labeled
to indicate which player (or players) must move at that position. A possible
move from a given position to another position is represented by an arrow.
These arrows can be labeled for easy reference. The node representing the
initial position may appear at the top of the diagram or may be specified in
some other way. At each terminal node (that is, a node representing a termi-
nal position), the payoffs will be shown by a suitable n-tuple. For example,
if there are three players A, B, and C, then the triple (2, −1, 4) at a given
terminal node indicates that a play of the game ending at that node yields
payoffs of 2, −1, and 4 to A, B, and C, respectively. (If the game in question
ends only in a win (W) or a loss (L) for each player, then the triple (L,L,W)
would indicate that A and B have lost and C has won.)
Example 0.1 Consider the following game for two players A and B. Start
with a pile of four sticks. At each move, a player can remove either one stick
or two sticks. The last player to move wins and the other player loses. Player
A has the first move. The directed graph for this game is shown in Figure 0.1.
(Each segment is assumed to be an arrow directed downward.) If you are
player A, what would you do on your first move?
FIGURE 0.1: Directed graph for game in Example 0.1.
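Although the book analyzes such games by hand, a short program can also search the game tree. The following is a minimal Python sketch of our own (not from the text; Python is one of the languages used for the computer solutions in Chapter 6). It decides, for a pile of a given size in the game of Example 0.1, whether the player about to move can force a win, and reports a winning first move if one exists; the function names are ours.

```python
# Game-tree search for Example 0.1 (remove 1 or 2 sticks; the last mover wins).
# Illustrative sketch only, not code from the book.
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(sticks: int) -> bool:
    """True if the player about to move can force a win with optimal play."""
    # A position is winning if some legal move leaves the opponent in a losing position.
    return any(not mover_wins(sticks - take) for take in (1, 2) if take <= sticks)

def winning_first_move(sticks: int):
    """Return a winning number of sticks to take (1 or 2), or None if none exists."""
    for take in (1, 2):
        if take <= sticks and not mover_wins(sticks - take):
            return take
    return None

print(mover_wins(4))          # True: player A can force a win from 4 sticks
print(winning_first_move(4))  # 1: taking one stick leaves B facing a losing pile of 3
```

The same search idea applies, with only small changes, to the other removal games discussed later in this Introduction and in Chapter 1.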
Until now, we have ignored a large class of important games, those that
involve an element of chance. Card games like poker and casino games like
roulette or craps always depend on the outcomes of uncertain events, like a
spin of a roulette wheel or the throw of a pair of dice. In order to take such
games into account, let us allow some positions to be positions at which the
choice of one of the possible moves is not made by one of the players but by
a device that selects each of the possible moves with a certain probability.
1 In some games, payoffs can also occur during game play, before a terminal position is
reached.
For example, the device might toss a coin and then choose one move if the
coin shows a Head and another move if the coin shows a Tail. A position
that assigns probabilities to the various possible moves from that position is
called a random position and a move from such a position is called a random
move. In a diagram of a game, a random position can be labeled with a special
symbol, such as “R,” and the numerical probabilities of the moves from that
position can be attached to the arrows representing those moves.
A game in which there are no random positions is said to be deterministic.
Thus, the outcome of such a game depends only on the choices made by the
players themselves and involves no element of chance. Chess, checkers, and
Tic-Tac-Toe are examples of deterministic games.
Following is a very simple example of a two-person nondeterministic game:
A fair coin is thrown. Without seeing the result, player B guesses whether the
coin shows a Head or a Tail. If B’s guess is correct, player A pays B one dollar,
whereas, if B’s guess is incorrect, B pays A one dollar. The graph of this game
is shown in Figure 0.2.
FIGURE 0.2: Directed graph for a game involving a fair coin toss.
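As a quick illustration (our own sketch, not from the text), a Monte Carlo simulation of this coin-guessing game shows that player B's average payoff is close to zero, as one would expect for a fair coin and a blind guess.

```python
# Monte Carlo estimate of player B's expected payoff in the coin-guessing game.
# Illustrative sketch only; assumes B guesses at random without seeing the coin.
import random

def b_payoff(rng: random.Random) -> int:
    coin = rng.choice("HT")
    guess = rng.choice("HT")           # B guesses without seeing the result
    return 1 if guess == coin else -1  # B wins $1 if correct, loses $1 otherwise

rng = random.Random(1)
trials = 100_000
print(sum(b_payoff(rng) for _ in range(trials)) / trials)  # close to 0.0
```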
Another characteristic of many games is that the outcome of certain moves
may not be known to all of the players. For example, in certain auctions a
player’s bid is known only to that player. In many card games, the initial
random move that consists of a distribution of cards is such that each player
only knows the cards that were dealt to that player. Thus, in such cases,
a player whose turn is to move may not know which of several positions
that player may be in.2 When such situations cannot occur, that is, when
the outcome of every possible move is known to all the players, the game is
called a game with perfect information. Chess, checkers, and Tic-Tac-Toe are
examples of games with perfect information.
An example of a game without perfect information is Matching Pennies.
In that game, player A puts down a penny, but player B does not see which
side is face up. Player B then puts down a penny. If both pennies show Heads
or both show Tails, B wins a penny from A. Otherwise, A wins a penny from
B. This game is finite, deterministic, and zero-sum.
Exercise 0.2 For each of the following games, determine whether it has the
properties of being finite, deterministic, zero-sum, or having perfect informa-
tion. Describe the initial positions and the terminal positions, if these are
not already specified. When feasible, draw a directed graph for the game and
indicate the payoffs at each terminal point.
1. Chess
2. Tic-Tac-Toe
3. The game in Example 0.1. (See Figure 0.1.)
4. The first player A names a positive integer n. Then the second player B
names a positive integer k. If the sum n + k is even, A wins one dollar
from B. If the sum n + k is odd, B wins one dollar from A.
5. Two players A and B alternately name integers from 1 through 9 (repeti-
tions are allowed) until the sum of all the numbers mentioned is greater
than or equal to 100. The last player to name a number pays one dollar
to the other player.
6. Players A and B, one after the other, show one or two fingers behind
their backs, without the other player seeing the opponent’s choice. Then
A and B guess the number of fingers displayed by their opponent. If
both guesses are right or both wrong, the result is a draw. If only one
player guesses correctly, then that player wins from the other player an
amount in dollars equal to the sum of the number of fingers displayed
by both players.
2 So-called imperfect information games are described in Chapter 5.
7. A fair coin is tossed. If a Head turns up, player A pays player B one
dollar. If a Tail turns up, B pays A one dollar.
8. (Simple poker) One card is dealt at random to player A from an ordinary
deck. Player B does not see what that card is. Player A then either
“folds,” pays player B one dollar, and the game is over, or A “bets”
three dollars. If A bets, then B can either “fold” and pay one dollar
to A and the game is over, or B can “call.” If B calls, then A wins three
dollars from B if A’s card is red, or A loses three dollars to B if A’s card
is black.
9. A fair coin is tossed until a Head turns up. If the number of tosses is
even, player A pays player B one dollar. If the number of tosses is odd,
B pays A one dollar.
A fundamental notion in game theory is that of strategy. The original
meaning of the word was “the art of directing the larger military movements
and operations of a campaign,” but the word has come to be used also in the
wider non-military sense of any plan for attaining one’s ends. In game theory,
we have in mind a special and much more precise meaning. A strategy for a
player is a specification that tells the player what that player should do in
every situation that might arise during a game. A strategy leaves nothing to
the imagination; once a strategy is chosen, the player’s moves are completely
determined. When all the players choose their strategies, the course of the
game and its outcomes are determined; the players could leave the actual
performance of the moves to assistants or to machines.
For almost all games, there are so many strategies that the description of all
of them is a practical impossibility. In fairly complicated games like checkers
or chess, the specification of any useful strategy would be very difficult to
formulate and extremely difficult to write down, although something of the
sort is done when computers are programmed to play checkers or chess. Good
checkers and chess players depend upon their own intuition and experience,
not upon an explicit strategy.
Some strategies for a player may be better than others. In a game with
numerical payoffs, a strategy is said to be non-losing if the player following
that strategy always receives a nonnegative payoff, no matter what the player’s
opponents do. Similarly, a strategy is said to be a winning strategy if the player
following that strategy always receives a positive payoff, no matter what the
player’s opponents do. In a game in which Win, Lose, or Draw are the possible
payoffs, a winning strategy is one that guarantees a Win, no matter what the
opponents do. A win-or-draw strategy is simply a non-losing strategy, that is,
one that guarantees a Win or a Draw, no matter what the opponents do. In
general, it is possible that no player has a winning or a non-losing strategy.
(However, in Chapter 1, we shall study a large class of two-person games in
which at least one of the players must have a non-losing strategy.)
Example 0.3 Starting with a pile of four sticks, players A and B move al-
ternately, taking away one or two sticks at each move. The last person to
move loses. (Notice that, in a certain sense, this is the opposite of the game in
Example 0.1, where the last person to move wins.) Player B has the following
winning strategy in this game. If A starts by taking one stick, then B should
remove two, forcing A to remove the last stick. If A starts by removing two
sticks, then B should take away one stick, again forcing A to remove the last
stick. In either case, B wins.
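The same kind of game-tree search used for Example 0.1 settles this game as well; only the base case changes, because now the player who takes the last stick loses. The sketch below (ours, not from the text) confirms that the second player B wins the four-stick game.

```python
# Game-tree search for the misere version in Example 0.3 (the last mover loses).
# Illustrative sketch only.
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins_misere(sticks: int) -> bool:
    if sticks == 0:
        return True   # the previous player removed the last stick and has therefore lost
    return any(not mover_wins_misere(sticks - take) for take in (1, 2) if take <= sticks)

print(mover_wins_misere(4))   # False: the first player A loses, as described above
```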
Exercise 0.4 Find a winning strategy for one of the players in games (3),
(4), and (5) of Exercise 0.2.
Exercise 0.5 In the game of Example 0.1, how many strategies does player A
have and how many strategies does B have?
Games in which the payoffs have numerical values may give a false impres-
sion of precision because the real value of the payoffs may differ significantly
from player to player. For example, a payoff of one hundred dollars to a poor
person may be of much greater value to that person than the same payoff to a
millionaire. In order to take this into account, the payoffs should be adjusted
so that they represent their value or utility to the various players. For each
player, there would have to be a utility function that transforms numerical
payoffs into numbers that measure the actual value of those payoffs for that
player. This leads to a quagmire of complicated, difficult issues that are not
directly connected with game theory itself.3 For that reason and in order not
to make this introductory text needlessly complex right at the beginning, we
shall generally sidestep the issue of utility and treat numerical payoffs in a
straightforward way. However, there will be occasions when the issue cannot
be ignored.
The book’s pedagogical strategy will be to start with relatively simple con-
cepts and theorems. This does not mean that the early going will always be
easy. For example, the combinatorial games that form the subject of Chap-
ter 1 are conceptually simple and the basic theorem about them has a simple
proof, but the analysis of individual games can involve problems of the highest
order of difficulty. In each succeeding chapter, further complexities and more
demanding arguments will be encountered, but we shall try to introduce them
in small, digestible bites. As much as possible, we shall avoid the use of com-
plicated mathematical concepts and notation in favor of ordinary language
and familiar symbolism.
Chapter 1 deals with certain games, called combinatorial games, that are of
general interest. Many of those games are recreational in nature. Readers who
are primarily interested in the connections of game theory with business and
economics, politics, or biology, and other sciences can skim through Chapter 1
and then move on to Chapter 2.
3 Helpful discussions may be found in Luce and Raiffa [1957] and in the ground-breaking
treatment in von Neumann and Morgenstern [1944]. There is a very brief introduction to
an axiomatic treatment of utility theory in Appendix A.
Chapter 1
Combinatorial games
1.1 Definition of combinatorial games
Combinatorial games are a large class of games that includes many familiar
games and also offers many opportunities for mathematical ingenuity. At the
same time, the relative simplicity of their definition makes them ideal for an
initial entry into the theory of games.
By a combinatorial game we shall mean any game with the following char-
acteristics:
1. There are only two players.
2. It is deterministic; that is, there are no random moves.
3. It is finite; that is, every play of the game ends after a finite number of
moves.
4. It is a game with perfect information; that is, the result of every move
by one player is known to the other player.
5. It is a zero-sum game. (Games that end with a win for one player and a
loss for the other player, or a draw for both players, qualify as zero-sum
games. This is a consequence of awarding a payoff of 1 for a win, a payoff
of −1 for a loss, and a payoff of 0 for a draw.)
Thus, combinatorial games are finite, deterministic, zero-sum, two-person
games with perfect information.1
A famous example of a combinatorial game is Tic-Tac-Toe (also known as
Noughts and Crosses). The game is played on a grid of three rows and three
columns. One player uses crosses (Xs) and the other player noughts (Os).
The players alternately place their own symbols (X or O) in any unoccupied
square. The X player moves first. The first player to completely fill a row,
column, or either diagonal with that player’s symbol wins. Figure 1.1 shows
some examples of Tic-Tac-Toe results.
1 The term “combinatorial game” has been used in various ways in game theory. Our
definition is one way of making it precise.
FIGURE 1.1: Different Tic-Tac-Toe results.
Exercise 1.1 Which of the following are combinatorial games?
1. The first player A names a positive integer n. Then the second player B
names a positive integer k. If the product nk is even, A wins one dollar
from B; if the product nk is odd, B wins one dollar from A.
2. Removal games.
(a) Initially, there is a pile of 5 sticks. Each of the two players alter-
nately removes 1 or 2 sticks. The game continues until there are no
sticks left. The last person to move loses.
(b) The same game as in (a), except that the last person to move wins.
3. The first player A chooses one of the numbers 1 or 2, and then the game
is over (without player B making any moves at all). A pays B one dollar
if A has chosen 1, and A pays B two dollars if A has chosen 2.
4. Chess.
Recall your childhood experience with Tic-Tac-Toe. You probably will re-
member that, after a while, you learned how to make sure that you never lost,
although it is likely that you never explicitly formulated what you had to do
to ensure at least a draw. If your opponent was as experienced as you, the
game would always end in a draw. Hence, the game no longer was of interest
and you stopped playing it. It is an amazing fact that Tic-Tac-Toe is, in an
important way, typical of all combinatorial games. We shall prove later that,
in any combinatorial game, either one player has a winning strategy; that is,
that player can always ensure a win, or both players have non-losing strategies;
that is, they can make sure that they never lose.
However, in even a simple game like Tic-Tac-Toe, the description of a
strategy can be enormously complicated. Therefore, we shall first look at some
very simple games.
Example 1.2 There is a pile of 3 sticks. The players alternately remove 1 or
2 sticks from the pile. The last person to move loses. Let A be the player who
moves first and let B be the other player. Player A has two strategies (A1 )
and (A2 ).
(A1 ) Remove 1 stick from the pile. (No further specification is required. If
player B then removes 1 stick, player A must remove the last stick. If
player B removes two sticks, the game is over.)
(A2 ) Remove 2 sticks. (Player B must then remove the last stick.)
Player B also has two strategies (B1 ) and (B2 ).
(B1 ) If player A took 1 stick, remove 1 stick. (Player A must then remove the
last stick.)
(B2 ) If player A took 1 stick, remove 2 sticks. (Then the game is over.)
Notice that player B need not specify what he would do if player A took
away 2 sticks. In that case, B would be forced to take away the last stick.
Player A has a winning strategy, (A2 ). When A removes 2 sticks, B must
remove the last stick and B loses. Observe that (A1 ) is not a winning strategy;
if B responds with strategy (B1 ), that is, if B removes a second stick, then A
is left with one stick to remove and loses.
Example 1.3 Consider the same game as in Example 1.2, except that the
last person to move wins the game. The possible strategies are unchanged.
However, player B now has a winning strategy, (B2 ).
Example 1.4 Consider a game like that of Example 1.3, except that the
initial pile consists of 13 sticks. The players move alternately, taking 1 or 2
sticks, and the last person to move wins. (As always, player A moves first.)
There are so many strategies for both players that we shall not bother to list
them. However, it turns out that player A has the following simple winning
strategy:
• Player A starts off by removing 1 stick, leaving 12.
• Thereafter,
– whenever player B removes 1 stick, A follows by removing 2 sticks.
– whenever player B removes 2 sticks, A follows by removing 1 stick.
Player A must eventually remove the last stick. The reason for this is that,
after each of A’s moves, the number of sticks left is always a multiple of 3
(that is, 12, 9, 6, 3, and, finally, 0). B will never find a pile consisting of just
1 or 2 sticks and, therefore, B will not be the last person to move.
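A few lines of Python (our own sketch, not from the text) make the strategy of Example 1.4 concrete: A takes 1 stick first and then always takes 3 minus whatever B just took, so that B always faces a multiple of 3.

```python
# Simulation of player A's strategy in Example 1.4 (13 sticks, take 1 or 2, last mover wins).
# Illustrative sketch only; B plays at random, A follows the multiple-of-3 strategy.
import random

def simulate(n: int = 13) -> str:
    sticks = n - 1                       # A's opening move: take 1 stick, leaving 12
    while sticks > 0:
        b_take = random.choice((1, 2))   # B may take either 1 or 2 sticks
        sticks -= b_take
        sticks -= 3 - b_take             # A restores a multiple of 3 (12, 9, 6, 3, 0)
    return "A takes the last stick and wins"

print(simulate())
```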
Example 1.5 For the game of Tic-Tac-Toe, let us describe a non-losing strat-
egy for the first player A, the crosses (Xs) player.
First move: Move into the center.
Case 1. The second player B, the noughts (Os) player, follows by moving
into a corner square. By the symmetry of the Tic-Tac-Toe board, we may
assume that it is the upper left-hand corner.
O |   |
  | X |
  |   |
Player A should now counter with:
O |   |
  | X |
X |   |
Player B’s next move is forced:
O |   | O
  | X |
X |   |
(When we say that a move is forced, we mean that any other move would lead
to immediate defeat.)
Player A has to reply as follows, to avoid losing:
O | X | O
  | X |
X |   |
B’s next move again is forced:
O | X | O
  | X |
X | O |
A continues as follows:
O | X | O
X | X |
X | O |
B is forced to make the following move:
O | X | O
X | X | O
X | O |
A fills the remaining square:
O | X | O
X | X | O
X | O | X
and the result is a draw. Thus, in Case 1, either player A wins (if B fails to
make the indicated forced moves) or there is a draw.
Case 2. Player B follows A’s first move by moving into a non-corner square.
By symmetry, we may assume that B moves into the top row:
  | O |
  | X |
  |   |
Player A quickly follows with:
X | O |
  | X |
  |   |
B’s next move is forced:
X | O |
  | X |
  |   | O
Player A then applies the coup de grace:
X | O |
  | X |
X |   | O
This yields a double threat along the first column and along one of the di-
agonals. No matter how B moves, A can win on A’s next move. So, in Case
2, player A wins. Thus, in either case, player A will not lose when he plays
according to the indicated instructions.
Exercise 1.6 Find winning strategies in the following games.
1. Player A names a positive integer n and player B then names a positive
integer k. A wins $1 from B if the sum n + k is odd, and B wins $1 from
A if the sum n + k is even.
2. The game of Exercise 1.1(1)
3. The same game as Example 1.2 except that the initial pile contains 4
sticks. (Remember that 1 or 2 sticks are removed each time, and the last
person to move loses.)
4. The same game as Example 1.2 except that the initial pile contains 5
sticks.
5. Consider the general case of the game in Example 1.2, where the initial
pile contains n sticks. Show that the second player B has a winning
strategy when n is of the form 3k + 1 (that is, 1, 4, 7, 10, 13, . . . ), and
the first player A has a winning strategy in all other cases.
Exercise 1.7 Consider the general case of the game in Example 1.3, where
the initial pile contains n sticks. Recall that the players remove 1 or 2 sticks
at a time and the last person to move wins. Describe those values of n for
which the second player B has a winning strategy and those values of n for
which the first player A has a winning strategy.
Exercise 1.8 Find a non-losing strategy for the second player (the noughts
player) in Tic-Tac-Toe. (There are more cases to be handled here than were
necessary for the first player’s non-losing strategy in Example 1.5.)
Exercise 1.9 Find a winning strategy for the first player in Exercise 0.2(5)
of the Introduction.
Exercise 1.10 Find a winning strategy for the first player in Exercise 1.1(3).
1.2 Fundamental theorem of combinatorial games
Fundamental theorem In any combinatorial game, at least one of the
players has a non-losing strategy. (If draws, that is, zero payoffs, are impossi-
ble, it follows that one of the players has a winning strategy.) Recall that, by
definition of a combinatorial game, the game concludes after a finite number
of moves. (Zermelo [114])
Proof: Let us assume, for the sake of contradiction, that neither the first
player A nor the second player B has a non-losing strategy. We shall show
that this would imply the possibility of a play of the game with infinitely
many moves, contradicting the finiteness condition for combinatorial games.
By a non-losing position for player A we mean a situation in the play of the
game from which point player A then has a non-losing strategy. Similarly, we
define a non-losing position for player B as a situation in the play of the game
from which point player B then has a non-losing strategy. By our assumption
that neither player has a non-losing strategy for the whole game, it follows
that the initial position P1 of the game is a non-losing position for neither A
nor B. In this position P1 , it is A’s turn to move.
(1) Any move that player A can make from position P1 cannot lead to a
non-losing position for A. (If there were such a move M leading to a
non-losing position P2 for A, then the original position P1 would have
been a non-losing position for A. In such a case, A’s non-losing strategy
starting at P1 would be to make the move M and then follow the non-
losing strategy available for A at the new position P2 .)
(2) There is at least one possible move M leading from the original position
P1 to a new position P2 , which is not a non-losing position for
player B. First of all, the game cannot end at position P1 . If it did, then,
since this is a zero-sum game, at least one of the players would receive
a nonnegative payoff. But then, for that player, position P1 would au-
tomatically be a non-losing position, contradicting our original assump-
tion.2 Second, if all of the moves available to player A at position P1
led to non-losing positions for B, then P1 would be a non-losing position
for B, since, no matter how A moved from position P1 , B would be able
to use a non-losing strategy for the remainder of the game.
In accordance with (2), let A make a move M leading from the original
position P1 to a new position P2 which is not a non-losing position for B. By
(1), P2 is also not a non-losing position for A. Thus, P2 is a non-losing position
for neither A nor B. Now, by repeating the same argument that we have just
given, we see that there must be a move from P2 to a new position P3 that
is again a non-losing position for neither A nor B. In this way, we conclude
that there is a possible infinite sequence of moves P1 → P2 → P3 → . . . ,
contradicting the finiteness of a combinatorial game.
Corollary of the fundamental theorem
In a combinatorial game, if one player X has a non-losing strategy, but has
no winning strategy, then the other player Y must have a non-losing strat-
egy. Consequently, in a combinatorial game, either one player has a winning
strategy or both players have non-losing strategies.
Proof: Let G denote the original combinatorial game. Assume Player X has
a non-losing strategy for G but not a winning strategy. Define a new combi-
natorial game G∗ that is the same as G except with respect to certain payoffs.
Whenever the original game G resulted in a draw, let the new game G∗ assign 1
to player Y and −1 to player X. Thus, G∗ has no draws.
The fundamental theorem tells us that X or Y has a non-losing strategy in
game G∗ . Since there are no draws in G∗ , it follows that X or Y has a winning
strategy in G∗ . Assume that X has a winning strategy in G∗ . This same
strategy would be a winning strategy in the original game G. (For, whenever
X receives a positive payoff in G∗ , X receives the same payoff in G.) But this
contradicts our hypothesis that X has no winning strategy in G.
Therefore, it is impossible for X to have a winning strategy in G∗ . The only
possibility left is that Y has a winning strategy in G∗ . But such a strategy
would be a non-losing strategy for Y in G. For, whenever this strategy does
not result in a draw in G, Y receives the same payoff in G as in G∗ . But this
strategy, being a winning strategy for Y in G∗ , would yield in such cases a
positive payoff for Y in G∗ , and hence also in G. Thus, this strategy never
results in a loss for Y in the original game G.
Now consider any combinatorial game. By the fundamental theorem, at
least one of the players X has a non-losing strategy. If X does not have a
winning strategy, then the Corollary tells us that the other player Y also has
2 Notice that, in the game in question, we can include the “null strategy,” which simply
consists of doing nothing. If a particular game always must end before a player is required
to make a move, then that player has a null strategy. If the game never ends with a loss for
the player, then the null strategy is a non-losing strategy for that player.
a non-losing strategy. In that case, if X and Y both use a non-losing strategy,
then the game must end in a draw. Thus, there are two possibilities:
1. Exactly one of the players has a winning strategy.
2. Both players have non-losing strategies.
In looser language, we can say that any combinatorial game is either unfair
(that is, one of the players always will win by correctly playing a winning
strategy) or it is uninteresting (that is, if both players correctly play their
non-losing strategies, a draw always results). Note that, if draws are impossible
in a given combinatorial game, then one of the players must have a winning
strategy.
Our proof of the fundamental theorem is only an existence proof. It shows
that the non-existence of non-losing strategies for both players is impossible.
But it does not tell us how to find such strategies. From a practical point
of view, therefore, it is not surprising that many combinatorial games are
still unsolved. In the first place, we may not know which of the players has
a non-losing strategy. In chess, for example, experience seems to indicate a
slight advantage for the first player, White, but it is still conceivable that
the second player, Black, might have a non-losing strategy, or even a winning
strategy. Computer programs that have been devised to play chess are not
based on a winning or a non-losing strategy, but rather use rules culled from
the experience of chess experts and (increasingly) the results from machine
learning. Chess is so complex that it may never yield to a definitive analysis.
Note, however, that the game of Checkers has been definitively solved by
analyzing all 500 quintillion (5 × 10^20) positions; see Schaeffer et al. [87].
A second limitation on the application of the fundamental theorem is that
we may know which player has a non-losing strategy without actually be-
ing able to describe such a strategy for practical purposes. Examples of this
predicament will turn up when we examine the game of Hex.
Exercise 1.11 Find winning or non-losing strategies in the following games.
1. Starting with a pile of 14 sticks, two players alternately remove 1, 2, or 3
sticks. The last person to move wins. (Hint: Since draws are impossible,
one of the players must have a winning strategy. The first player A can
ensure that, after each of the second player B’s moves, the number of
sticks left is not a multiple of 4. Since 0 is a multiple of 4, B cannot
remove the last stick. Compare Example 1.4.)
2. Let n be a positive integer. Starting with a pile of n sticks, two players
alternately remove 1, 2, or 3 sticks. The last person to move wins. (Hint:
The answer depends on n. First consider the case where n is not a
multiple of 4.)
3. Let n be a positive integer. Starting with a pile of n sticks, two players
alternately remove 1, 2, or 3 sticks. The last person to move loses.
4. Starting with a pile of 14 sticks, two players alternately remove 1, 2, 3,
or 4 sticks. The last person to move wins.
5. Let n be a positive integer and let k be an integer such that 1 ≤ k <
n. Starting with a pile of n sticks, two players alternately remove any
number of sticks between 1 and k. The last person to move wins.
6. The same game as (5) except that the last person to move loses.
Exercise 1.12 (Symmetry) Find winning or non-losing strategies in the fol-
lowing games.
1. Two players take turns placing pennies on a large table (which is assumed
to be circular or rectangular) until no further pennies can be put down.
No penny may overlap another penny. The last person to put down a
penny wins.
2. An 8 × 8 checkerboard contains 64 one-inch squares. Two players take
turns covering two adjacent squares with 1×2-inch dominoes. The game
is played until no further move is possible. The last person to move is
the winner.
3. ◦ Same as (1) except that the last person to move loses.3
4. ◦ Same as (2) except that the last person to move loses.
5. Consider 25 sticks arranged in a 5 × 5 square:
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
Players alternately take any number of sticks from a single row or col-
umn. At least one stick must be taken. There is an additional restriction
that a group of sticks cannot be taken if the group contains a gap. (For
example, if the second stick of the first row already has been removed, a
player cannot, in one move, remove the first and third sticks of the first
row.) The last person to move wins.
6. Same as (5) except that there are six rows and six columns.
7. ◦ Same as (5) except that the last person to move loses.
8. Same as (2) on an n × n checkerboard. (This game is called CRAM. See
Cowen and Dickau [24].)
Exercise 1.13 (Variations and disguised forms of Tic-Tac-Toe) Find win-
ning or non-losing strategies in the following games.
1. (Martin Gardner) From an ordinary deck of cards, the spades from the
ace to the nine (that is, ace, two, three, . . . , nine) are placed face up
on a table. Two players choose in turn from these cards. A card, once
chosen, cannot be chosen again. A winner is the first player to get three
cards adding up to fifteen (with the ace counting as one). Hint: Use the
following magic square:
3 The symbol “◦” stands for “open” and means that a solution is not known to the
authors.
2 7 6
9 5 1
4 3 8
Note that, in a magic square, the rows, columns, and diagonals all add
up to the same number, in this case, fifteen.
2. (Leo Moser) Two players alternately choose a word from the following
list of nine words: pony, shin, easy, lust, peck, puma, jury, bred, warn.
A player who is the first to have chosen three words having a letter in
common wins. The same word cannot be chosen twice.
3. Opposite Tic-Tac-Toe. This is the same as ordinary Tic-Tac-Toe except
that the first person who completes a row, column, or diagonal with that
person’s marks loses. (Hint: Let the first person move into the center
and then copy the opponent’s moves symmetrically with respect to the
center. The second player’s strategy is less obvious.)
4. David and Jill play regular Tic-Tac-Toe. However, in this new game,
David wins if the original game is a draw, whereas Jill wins if the original
game does not end in a draw. (Hint: Jill has a winning strategy, no matter
who goes first.)
5. Regular Tic-Tac-Toe except that neither player can use the center square
on the player’s first move.
6. Regular Tic-Tac-Toe except that neither player can occupy the center
square unless it is a final winning move.
7. Austin’s Wild Tic-Tac-Toe (Gardner [35, Chap. 12]) This game is played
like ordinary Tic-Tac-Toe except that each player can mark either a cross
(X) or a nought (O) in any unoccupied square. The winner is the first
player to complete a line (horizontal, vertical, or diagonal) of three marks
of the same kind. That player did not have to put in all three marks,
only the last one. (Hint: One of the players has a winning strategy.)
8. Ancient Tic-Tac-Toe. In this game, played in ancient China, Greece,
and Rome, each player has only three of that player’s marks. The play-
ers move alternately until all six marks have been placed. Thereafter,
the players alternately move one of their marks to a horizontally or ver-
tically adjacent unoccupied square. The first player to have that player’s
three marks all on one line (horizontal, vertical, or diagonal) wins. Find
a winning strategy for the first player.
9. 4 × 4 Tic-Tac-Toe.
Exercise 1.14 Find winning strategies in the following games.
1. A penny is placed on one of the squares on an edge of a checkerboard,
and then moved alternately by players A and B. Each move is either
forward toward the opposite edge or sideways, one square at a time.
It is forbidden to make a move that returns the penny to a position
occupied earlier in the game. The winner is the one who pushes the
penny into a square on the opposite edge for the first time. Player B
places the penny at the start and then player A has the first move.
2. Same as (1) except that the one who pushes the penny into a square on
the opposite edge for the first time loses.
Exercise 1.15 Show that, if we extend the notion of a combinatorial game to
allow more than two players, then the fundamental theorem is no longer true.
(Hint: Find a suitable three-person game in which no player has a non-losing
strategy.)
We shall now examine a few special combinatorial games, the analysis
of which is not only far from obvious but also involves the use of various
mathematical ideas in quite unexpected ways.
1.3 Nim
At the beginning of a game of Nim4 , there are one or more piles of objects.
Different piles may contain different numbers of objects. The players alter-
nately remove one or more objects from one of the piles (all of the objects in
the pile may be taken). The last person to move wins.
It is clear that there are many different forms of Nim, depending upon
the number of piles and the number of objects in each pile. Since draws are
impossible in Nim, one of the players must have a winning strategy. Our
analysis will apply to all forms, although which player has a winning strategy
will depend upon the particular form of Nim.
Let us look at an especially simple form of Nim:
First pile: | |
Second pile: | |
Here, the second player B has the following winning strategy.
1. If the first player A takes both objects from a pile, then B removes both
objects from the other pile. Then B wins.
2. If A takes one object from a pile, then B takes one object from the other
pile. We would then have:
First pile: |
Second pile: |
4 The name “Nim” may be derived from the German word “Nimm” or the archaic English
word “Nim,” meaning “Take!”. The game probably is of Chinese origin. The name Nim was
coined by C. L. Bouton, a professor of mathematics at Harvard University, who published
in 1901 the first successful analysis of Nim, along the lines which we follow here.
A must now take one object from one of the piles. Then B removes the
last object and wins.
Exercise 1.16 Find a winning strategy in the following fairly simple form of
Nim:
First pile: | | |
Second pile: | |
Third pile: | | |
Our explanation of the winning strategies in Nim will employ a simple
arithmetic fact, namely, that every positive integer can be represented in one
and only one way as a sum of distinct powers of 2.
Examples
1 = 2^0
2 = 2^1
3 = 2 + 1 = 2^1 + 2^0
4 = 2^2
5 = 4 + 1 = 2^2 + 2^0
...
13 = 8 + 4 + 1 = 2^3 + 2^2 + 2^0
...
38 = 32 + 4 + 2 = 2^5 + 2^2 + 2^1
For each number, the corresponding sum of distinct powers of 2 is called
its binary decomposition. Thus, the binary decomposition of 22 is 2^4 + 2^2 + 2^1.
Exercise 1.17 Find the binary decompositions of the numbers from 8
through 12 and the numbers from 14 to 20.
Imagine a finite collection of piles of objects. For each pile, write down the
binary decomposition of the number of objects in that pile. The collection of
piles is said to be balanced if each of the powers of 2 occurs an even number
of times in the binary decompositions of all the piles; otherwise, the collection
is said to be unbalanced.
Example A
First pile:  | | | |  4 = 2^2
Second pile: | | |    3 =       2^1 + 2^0
Third pile:  | | | |  4 = 2^2
Fourth pile: | |      2 =       2^1
In this case, there are four piles. The binary decompositions have been written
so that the same powers of 2 occur in one column. It is clear that this collection
is unbalanced, since 2^0 occurs an odd number of times (namely, once).
Example B
| | |        3 = 2^1 + 2^0
| | | | |    5 = 2^2 + 2^0
|            1 = 2^0
| | |        3 = 2^1 + 2^0
| | | |      4 = 2^2
This collection is balanced. (2^2 occurs twice, 2^1 occurs twice, and 2^0 occurs
four times.)
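In code, checking for balance amounts to counting how many of the piles' binary
decompositions contain each power of 2. A sketch building on the
binary_decomposition helper above (is_balanced is again a name of our own
choosing):

    from collections import Counter

    def is_balanced(piles):
        """True if every power of 2 occurs an even number of times in the
        binary decompositions of the piles."""
        counts = Counter()
        for pile in piles:
            counts.update(binary_decomposition(pile))
        return all(count % 2 == 0 for count in counts.values())

    print(is_balanced([4, 3, 4, 2]))       # False: Example A (2^0 occurs once)
    print(is_balanced([3, 5, 1, 3, 4]))    # True:  Example B

Equivalently, a collection is balanced exactly when the bitwise XOR of the pile
sizes is 0, but the counting version above stays closer to the definition just
given.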
Notice the following facts about balanced and unbalanced collections.
(1) If a player confronts a balanced collection, the new collection after that
player moves must be unbalanced. This follows from the fact that
exactly one pile can be changed in each move. The new binary decom-
position for that pile differs from that of the old pile by the presence or
absence of at least one power of 2, say 2^j. Since 2^j previously occurred
an even number of times in the balanced collection, it must now occur
an odd number of times. Hence, the new collection is unbalanced.
(2) If a player confronts an unbalanced collection, that player can make
a move in such a way that the new collection is balanced. See
Examples C and D.
Example C
Consider the unbalanced collection:
| | |    3 = 2^1 + 2^0
| |      2 = 2^1
| | |    3 = 2^1 + 2^0
Here, 2^1 occurs an odd number of times. Remove two objects from the first
pile, obtaining the balanced collection:
|        1 = 2^0
| |      2 = 2^1
| | |    3 = 2^1 + 2^0
Example D
Consider the unbalanced collection:
| | | | |    5 = 2^2 + 2^0
| | |        3 = 2^1 + 2^0
| | |        3 = 2^1 + 2^0
| | |        3 = 2^1 + 2^0
Here, both 2^2 and 2^1 occur an odd number of times. It suffices to take
away 2^2 from the first pile and add 2^1 to the first pile. Thus, the new first pile
should contain 2^1 + 2^0 objects. Hence, we should reduce the first pile from 5
objects to 3 objects:
| | |    3 = 2^1 + 2^0
| | |    3 = 2^1 + 2^0
| | |    3 = 2^1 + 2^0
| | |    3 = 2^1 + 2^0
This new collection is balanced.
To prove assertion (2) above, consider an unbalanced collection and let 2^k
be the highest power of 2 occurring an odd number of times in the binary
decompositions of its piles. Find a pile P having 2^k in its binary
decomposition. For each power of 2 occurring an odd number of times in the
binary decompositions of the piles of the collection, add that power to the
binary decomposition of P if it is missing and drop it if it is present. The
resulting number is always smaller than the number of objects originally in
the pile P, since we dropped 2^k and added at most
2^{k−1} + 2^{k−2} + · · · + 2 + 1, which is equal to 2^k − 1.⁵ We can obtain
this new number by removing a suitable number of objects from the pile P.
The resulting collection is balanced.
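The balancing procedure just proved can be transcribed almost word for word.
A sketch (again with names of our own choosing, reusing the helpers above) that
returns which pile to play in and how many objects to leave in it:

    from collections import Counter

    def balancing_move(piles):
        """For an unbalanced collection, return (pile_index, new_size):
        reducing pile pile_index to new_size objects balances the collection."""
        counts = Counter()
        for pile in piles:
            counts.update(binary_decomposition(pile))
        odd_powers = [p for p, c in counts.items() if c % 2 == 1]
        assert odd_powers, "the collection is already balanced"

        k = max(odd_powers)          # highest power occurring an odd number of times
        i = next(idx for idx, pile in enumerate(piles)
                 if k in binary_decomposition(pile))    # a pile P containing 2^k

        new_size = piles[i]
        for p in odd_powers:         # toggle each odd power in P's decomposition
            if p in binary_decomposition(piles[i]):
                new_size -= p        # drop it if present
            else:
                new_size += p        # add it if missing
        return i, new_size

    print(balancing_move([8, 10, 6, 3]))   # (2, 1): reduce the third pile from 6 to 1

This reproduces Example 1.18 below. (The same number can also be obtained as the
bitwise XOR of piles[i] with the XOR of all the pile sizes, but the loop above
follows the text's argument directly.)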
Example 1.18 Consider the unbalanced collection:
| | | | | | | |        8 = 2^3
| | | | | | | | | |    10 = 2^3 + 2^1
| | | | | |            6 = 2^2 + 2^1
| | |                  3 = 2^1 + 2^0
Observe that 2^2 is the highest power of 2 occurring an odd number of
times. Choose a pile in which 2^2 occurs. In this case, there is only one such
pile, the third pile. Change 2^2 + 2^1 to 2^0, that is, remove 5 objects from the
third pile. (We have dropped 2^2 and 2^1 and we have added 2^0, since these
powers of 2 occur an odd number of times in the binary decompositions of the
collection.) The new collection is
| | | | | | | |        8 = 2^3
| | | | | | | | | |    10 = 2^3 + 2^1
|                      1 = 2^0
| | |                  3 = 2^1 + 2^0
This collection is balanced.
By virtue of (1) and (2), it is clear that, if a player X is presented with an
unbalanced collection, player X always can ensure that his opponent Y never
produces a balanced collection after any of Y’s future moves. In fact, if X
first produces a balanced collection, then Y must turn it into an unbalanced
collection. Then X can again produce a balanced collection, and so on. But
this means that X must win. For, after each of Y's moves, the collection is
unbalanced and there must be some objects remaining.
⁵ Let S = 2^{k−1} + 2^{k−2} + · · · + 2 + 1. Multiplying both sides by 2, we get
2S = 2^k + 2^{k−1} + · · · + 2^2 + 2. Subtracting the first equation from the second,
we obtain S = 2^k − 1.
Thus, if the original collection is unbalanced, then the first player A has
a winning strategy. On the other hand, if the original collection is balanced,
then the second player B has a winning strategy. (Because A’s first move must
produce an unbalanced collection.)
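Putting the two sketches together gives a quick way to check who should win a
given Nim position and, when the first player wins, to print one good opening
move (a usage sketch, assuming the is_balanced and balancing_move helpers above
are in scope):

    for piles in ([2, 2], [8, 10, 6, 3]):
        if is_balanced(piles):
            print(piles, "-> balanced: the second player has a winning strategy")
        else:
            i, new_size = balancing_move(piles)
            print(piles, "-> unbalanced: the first player has a winning strategy;")
            print("   for example, reduce pile", i + 1, "to", new_size, "objects")

The first position is the two-pile game analyzed at the start of this section;
the second is the collection of Example 1.18.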
Exercise 1.19 For each of the following forms of Nim, determine which player
has a winning strategy. Then practice applying that strategy by playing the
game several times.
1. Piles of 3, 5, and 3 objects.
2. Piles of 3, 4, 4, 5, and 6 objects.
Exercise 1.20 You are the first player A in the following game of Nim:
| | | | | | | | |
| | | | | | | |
| | | | | |
| | |
| | |
What should your first move be?
Exercise 1.21 If there are just two piles in Nim, show that the second player
has a winning strategy if and only if the two piles contain the same number
of objects.
Exercise 1.22 If there are 10 objects distributed among three piles, but you
do not know the number of objects in each pile, would you prefer to go first
or second in a game of Nim? Why?
Exercise 1.23 Trick based upon binary decompositions.
Peter asks someone to think of any number between 1 and 15, and then
to tell him in which of the columns below the number appears.
 8    4    2    1
 9    5    3    3
10    6    6    5
11    7    7    7
12   12   10    9
13   13   11   11
14   14   14   13
15   15   15   15
Peter adds the numbers at the top of the columns in which the unknown num-
ber occurs. The sum is the unknown number. (For example, if the unknown
number is 11, Peter will be told that it occurs in the first, third, and fourth
columns. Therefore, the sum 8 + 2 + 1 is the unknown number.)
1. Why does this trick work?
2. How would Peter extend this trick in order to guess any unknown num-
ber between 1 and 31?
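The columns in Exercise 1.23 are simply the numbers from 1 to 15 grouped by
which powers of 2 appear in their binary decompositions, and the same
construction works for any range of the form 1 to 2^m − 1. A short sketch (the
function name is ours) that regenerates the table:

    def trick_columns(limit):
        """One column per power of 2 up to limit; the column headed by 2^j
        lists every number from 1 to limit whose decomposition contains 2^j."""
        headers = [2 ** j for j in range(limit.bit_length())]
        return {h: [n for n in range(1, limit + 1) if n & h]
                for h in sorted(headers, reverse=True)}

    for header, column in trick_columns(15).items():
        print(header, column)
    # 8 [8, 9, 10, 11, 12, 13, 14, 15]
    # 4 [4, 5, 6, 7, 12, 13, 14, 15]
    # 2 [2, 3, 6, 7, 10, 11, 14, 15]
    # 1 [1, 3, 5, 7, 9, 11, 13, 15]

Calling trick_columns(31) shows how the table extends to larger ranges.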
Exercise 1.24 Prove that every positive integer has one and only one bi-
nary decomposition. (Hint: Prove the existence of a binary decomposition by
mathematical induction. To prove uniqueness, assume that there are two bi-
nary decompositions for the same number and obtain a contradiction by using
the formula 2^k + 2^{k−1} + · · · + 2 + 1 = 2^{k+1} − 1.)
Reverse Nim
This is the same as Nim except that the last person to move loses. The
winning strategy for Nim does not seem to be applicable in Reverse Nim.
However, we shall see that it can be modified to yield a winning strategy in
Reverse Nim.
Example 1.25
1. Consider the simple case of two piles, each with one object. In Nim,
the second player has a winning strategy, whereas the first player has a
winning strategy in Reverse Nim. Of course, in both cases, the winning
strategy is the only possible strategy.
2. Consider three piles of one object each. In Reverse Nim, the second
player has an automatic winning strategy.
3. Consider the game with the following three piles:
| | |
|
|
The first player has a winning strategy in Reverse Nim. Remove two
objects from the first pile, reducing the game to that of (2).
Now let us consider an arbitrary game of Reverse Nim. A pile with only
one object will be called unary. A pile with more than one object will be called
multiple. We can divide the analysis of Reverse Nim into three cases.
Case 1. All piles are unary. It is obvious that the first player has a winning
strategy if and only if there are an even number of piles. The play of the game
is automatic.
Case 2. Exactly one pile is multiple. Then the first player has a winning
strategy. (Why?)
Case 3. At least two piles are multiple. In this case, we shall show that
the player having the winning strategy in ordinary Nim also has a winning
strategy in Reverse Nim.
Subcase 3a. Assume that the initial collection of piles is balanced. The
second player then has a winning strategy, as in Nim. Player A’s first move
produces an unbalanced collection. Then B restores a balanced collection,
and so on. Eventually, one of the players must leave a collection having only
one multiple pile. But a collection with only one multiple pile is necessarily
unbalanced. (For only the multiple pile contributes a power of 2 different from
2^0.) Thus, it must be player A who has left a collection with only one multiple
pile. It is then B’s turn to move and, by Case 2, B has a winning strategy.
Subcase 3b. Assume that the initial collection of piles is unbalanced.
Then the first player A has a winning strategy, as in Nim. First, A restores
a balanced collection. B then must produce an unbalanced collection, and so
on. As in Subcase 3a, the player who first produces a collection with only
one multiple pile is the player who always produces unbalanced collections,
namely B. Thus, player A eventually confronts a collection of piles with only
one multiple pile and, by Case 2, A has a winning strategy.
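The three cases translate directly into a small decision function (a sketch,
reusing the is_balanced helper from the Nim section; the function name is ours):

    def first_player_wins_reverse_nim(piles):
        """True if the first player has a winning strategy in Reverse Nim."""
        multiple = sum(1 for pile in piles if pile > 1)   # number of multiple piles
        if multiple == 0:
            return len(piles) % 2 == 0     # Case 1: all piles unary
        if multiple == 1:
            return True                    # Case 2: exactly one multiple pile
        return not is_balanced(piles)      # Case 3: same winner as in ordinary Nim

    print(first_player_wins_reverse_nim([1, 1, 1]))    # False, as in Example 1.25(2)
    print(first_player_wins_reverse_nim([3, 1, 1]))    # True,  as in Example 1.25(3)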
Exercise 1.26 Determine the winner and the winning strategy in the follow-
ing examples of Reverse Nim.
1. Piles of 3, 5, and 3 objects.
2. Piles of 3, 4, 2, 5, and 6 objects.
3. Piles of 1, 1, 1, and 2 objects.
4. Piles of 1, 3, 5, and 7 objects. (This form was played in the French movie
Last Year at Marienbad [108].)
Exercise 1.27 Analyze the game which is the same as Nim except that the
second player B is allowed to dictate the first move of player A.
Exercise 1.28 Analyze the game which is the same as Reverse Nim except
that the second player B is allowed to dictate the first move of player A.
1.4 Hex and other games
The combinatorial games discussed so far have been either trivial (Tic-Tac-
Toe), relatively simple to analyze (Nim), or extremely complicated (chess
or checkers). There is one family of games, however, that combines simplicity
and depth in such a way as to make it extremely attractive to mathematicians.
This kind of game was invented in 1942 by the Danish engineer, inventor, and
poet, Piet Hein. It was marketed in Denmark under the name Polygon and
was brought out in America under the name Hex by Parker Brothers in 1952.
Hex is played on an n × n board composed of hexagons. For n = 2, 3, 4, 5,
the boards are shown in Figure 1.2. To describe the boards and moves made
on them, we shall number the hexagons on an n × n board from 1 to n^2,
proceeding along each row from left to right, and then down the rows. Once
this numbering is adopted, it is easier to omit the hexagons themselves and
just write their numbers. Thus, the Hex
boards in Figure 1.2 can be drawn as in Figure 1.3.
FIGURE 1.2: Hex boards of different sizes.
2 × 2:   1  2      3 × 3:   1  2  3      4 × 4:   1  2  3  4      5 × 5:   1  2  3  4  5
         3  4               4  5  6               5  6  7  8               6  7  8  9 10
                            7  8  9               9 10 11 12              11 12 13 14 15
                                                 13 14 15 16              16 17 18 19 20
                                                                          21 22 23 24 25
FIGURE 1.3: Cell numbers on Hex boards of different sizes.
The opponents are called White and Black, with White moving first. The
top and bottom rows are called White’s edges, and the two sides are called
Black’s edges. (The corners are shared by White and Black.) Thus, White
takes the North and South edges, while Black takes the East and West edges.
White and Black alternately place their tokens (say, stones or checkers) inside
previously unoccupied hexagons. White uses white tokens and Black uses black
tokens. White’s objective is to construct a continuous chain of white tokens
connecting White’s edges (that is, the top and bottom of the board), whereas
Black tries to construct a continuous chain of black tokens connecting Black’s
edges (that is, the two sides of the board). Adjacent tokens in a chain must
be in hexagons having a common edge. A chain need not be straight. Two
examples of winning positions in 5 × 5 Hex are illustrated in Figure 1.4.
In these diagrams, we have omitted the number of a hexagon when it is
occupied by a token. The black circles represent Black tokens and the empty
circles represent White tokens.
 1  2  3  •  •        1  ◦  3  4  5
 ◦  ◦  ◦  • 10        •  •  ◦  9 10
 •  •  ◦  • 15       11 12  •  ◦  •
16  ◦  •  • 20       16 17  ◦  ◦  •
21 22  ◦  ◦ 25       21 22  ◦ 24 25
    Black wins           White wins
FIGURE 1.4: Examples of completed Hex boards.
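To make the winning condition concrete, here is a Python sketch of a chain test
for White, using the cell numbering above. The six neighbor offsets are our own
assumption about the board geometry, chosen to agree with the adjacencies used
in the 2 × 2 and 3 × 3 analyses below (each cell touches its left and right
neighbors, the cell directly above and the one above-left, and the cell directly
below and the one below-right):

    from collections import deque

    # Neighbor offsets for a cell at (row, col); an assumption matching the
    # small-board analyses in this section.
    HEX_OFFSETS = [(0, -1), (0, 1), (-1, -1), (-1, 0), (1, 0), (1, 1)]

    def white_wins(n, white_cells):
        """True if white_cells (numbered 1..n*n, row by row) contains a chain
        of adjacent cells joining the top row to the bottom row."""
        occupied = {divmod(cell - 1, n) for cell in white_cells}   # number -> (row, col)
        frontier = deque(rc for rc in occupied if rc[0] == 0)      # White's top edge
        seen = set(frontier)
        while frontier:
            row, col = frontier.popleft()
            if row == n - 1:                                       # reached White's bottom edge
                return True
            for dr, dc in HEX_OFFSETS:
                nxt = (row + dr, col + dc)
                if nxt in occupied and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    # White's tokens in the right-hand board of Figure 1.4:
    print(white_wins(5, {2, 8, 14, 18, 19, 23}))                   # True

Black's winning condition is checked the same way, starting from the leftmost
column and looking for the rightmost column.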
Let us look at the simplest forms of Hex. We start with the following 2 × 2
empty Hex board.
1 2
3 4
If White moves into hexagon 1, then White will win. For, no matter where
Black moves next, White will be able to move into either hexagon 3 or
hexagon 4. Thus, White has a winning strategy. (Notice that White also could
win by moving into 4 first and then into either 1 or 2, depending on Black’s
first move. However, if White makes the mistake of moving into 2 or 3 first,
then Black can force a win.)
Now consider the following 3 × 3 empty Hex board.
1 2 3
4 5 6
7 8 9
White again has a winning strategy. First, White moves into 1.
◦ 2 3
4 5 6
7 8 9
The rest of the strategy divides into cases, depending upon Black’s response.
Case 1. Assume Black’s first move is into any hexagon not in the bottom row.
White then moves into either 4 or 5. (At least one of 4 or 5 was still
unoccupied.) Say, White moves into 4:
◦ 2 3
◦ 5 6
7 8 9
(Black’s first move is not shown. Black has a token in one of the hexagons
2, 3, 5, 6.) Then, after Black’s next move, White will be able to move
into either 7 or 8, winning the game.
Case 2. Assume Black’s first move is into 7.
◦ 2 3
4 5 6
• 8 9
Then White moves into 5.
◦ 2 3
4 ◦ 6
• 8 9
After Black’s next move, 8 or 9 is still unoccupied. White moves into 8
or 9, winning the game.
Case 3. Assume Black’s first move is into 9.
◦ 2 3
4 5 6
7 8 •
Then White moves into 4.
◦ 2 3
◦ 5 6
7 8 •
After Black’s next move, 7 or 8 is still unoccupied. White moves into 7
or 8, winning the game.
Case 4. Assume Black’s first move is into 8.
◦ 2 3
4 5 6
7 • 9
White can then force the following sequence of moves, in the sense that,
if Black does not make the indicated move, White wins easily in one
more move. First, White moves into 9. Black is forced to move into 5.
Then White moves into 6. No matter what Black’s next move is, White
can move into either 2 or 3, winning the game.
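For boards this small, the case analysis can also be checked by brute force.
Below is a sketch of an exhaustive game-tree search; it reuses white_wins and
HEX_OFFSETS from the earlier sketch, adds the analogous test for Black, and is
practical only for the 2 × 2 and 3 × 3 boards. Run on the 3 × 3 board after
White opens in hexagon 1, it reports that Black cannot force a win, in agreement
with the analysis above.

    from collections import deque

    # Assumes HEX_OFFSETS and white_wins from the previous sketch are in scope.

    def black_wins(n, black_cells):
        """True if black_cells contains a chain joining the left and right columns."""
        occupied = {divmod(cell - 1, n) for cell in black_cells}
        frontier = deque(rc for rc in occupied if rc[1] == 0)      # Black's West edge
        seen = set(frontier)
        while frontier:
            row, col = frontier.popleft()
            if col == n - 1:                                       # reached Black's East edge
                return True
            for dr, dc in HEX_OFFSETS:
                nxt = (row + dr, col + dc)
                if nxt in occupied and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return False

    def to_move_wins(n, white, black, white_to_move):
        """True if the player to move can force a win (exhaustive search)."""
        for cell in range(1, n * n + 1):
            if cell in white or cell in black:
                continue
            if white_to_move:
                new_white = white | {cell}
                if white_wins(n, new_white) or not to_move_wins(n, new_white, black, False):
                    return True
            else:
                new_black = black | {cell}
                if black_wins(n, new_black) or not to_move_wins(n, white, new_black, True):
                    return True
        return False

    # After White's opening move into hexagon 1 on the 3 x 3 board, Black to move.
    # (May take a few seconds in plain Python.)
    print(to_move_wins(3, frozenset({1}), frozenset(), False))     # False: White's opening wins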
Exercise 1.29
1. In 3 × 3 Hex, show that White has winning strategies beginning in any
hexagon on the principal diagonal (that is, 1, 5, or 9). We have verified
this above for hexagon 1.
2. Show that, in 3 × 3 Hex, White also has winning strategies beginning
with hexagons 4 or 6. (The analysis is quite tedious.)
3. Show that, in 3 × 3 Hex, if White’s first move is into hexagons 2, 3, 7,
or 8, then Black, from that point on, has a winning strategy.
Exercise 1.30 Consider 4 × 4 Hex. (See Figure 1.3.)
1. Show that White has a winning strategy, starting anywhere on the prin-
cipal diagonal, that is, in any of the hexagons 1, 6, 11, or 16.
2. Show that, if White starts anywhere but on the principal diagonal, Black
can ensure a win.
Exercise 1.31 Find a winning strategy for White in 5 × 5 Hex.
To start our general analysis of Hex, we note two facts:
1. It is impossible for both White and Black to win in a game of Hex. This
is clear because, if White has a chain connecting the North and South
edges, this prevents a chain from being formed between the East and
West edges.
References
Araujo, F. C. and A. B. Leoneti, “Game Theory and 2 × 2 Strategic Games Applied for Modeling
Oil and Gas Industry Decision-Making Problems,” Pesquisa Operacional, Vol. 38 (3), 2018, pp.
479–497. https://www.scielo.br/pdf/pope/v38n3/1678-5142-pope-38-03-479.pdf (Cited on page
238.)
Aumann, R. J., Lectures on Game Theory, Westview Press, 1989. (Cited on page 161.)
Aumann, R. J. and M. Maschler, “The bargaining set for cooperative games,” in Advances in
Game Theory, Vol. 52, Annals of Mathematics Studies, Princeton University Press, 1964, pp.
443–476. (Cited on page 168.)
Averbach, B. and D. Chein, Mathematics: Problem Solving Through Recreational Mathematics,
Freeman, 1980. (Cited on page 38.)
Avis, D. et al., “Enumeration of Nash Equilibria for Two-Player Games,” Economic Theory,
2010, 42, pp. 9–37. (Cited on page.)
Axelrod, R., The Evolution of Cooperation, Basic Books, 1984. (Cited on page.)
Axelrod, R., The Complexity of Cooperation, Princeton University Press, 1997. (Cited on page.)
Axelrod, R. and L. D'Ambrosio, Annotated Bibliography on The Evolution of Cooperation,
http://www-personal.umich.edu/axe/research/Evol_of_Coop_Bibliography.htm (accessed 18
Nov 2020). (Cited on page.)
Axelrod, R. and D. Dion, The further evolution of cooperation, Science, Vol. 242, 1988, pp.
1385–1390. (Cited on page.)
Banzhaf, J. F. III, “Weighted voting does not work: A mathematical analysis,” Rutgers Law
Review, Vol.19, 1965, pp. 317–343. (Cited on page.)
Banzhaf Power Index,
https://people.math.binghamton.edu/fer/courses/math130/ZIS_Spr14/chapter1/Banzhaf.html
(accessed 20 Oct 2023). (Cited on page.)
Barron, E. N., Game Theory: An Introduction, Wiley-Interscience, John Wiley, 2008. (Cited on
page.)
Beasley, J. D., The Mathematics of Games, Oxford University Press, 1989. (Cited on page.)
Berlekamp, E. R., J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays:
Games in General, Vol. 1, Academic, 1982. (2nd ed., A.K. Peters, 2001). (Cited on page.)
Berlekamp, E. R., J. H. Conway, and R. K. Guy, Winning Ways for Your Mathematical Plays:
Games in Particular, Vol. 2, Academic, 1982. (2nd ed., A.K. Peters, 2003). (Cited on page.)
Bland, R. G., “New finite pivoting rules for the simplex method,” Mathematics of Operations
Research, Vol. 2, 1977, pp. 103–107. (Cited on page.)
Böhm-Bawerk, E. von, Positive Theory of Capital (translation of 1891 German edition), Stechert,
1923. (Cited on page.)
Brams, S. J., “Theory of Moves,” Cambridge University Press, 1994. (Cited on page.)
Brams, S. J., “Game Theory and the Humanities: Bridging Two Worlds,” MIT Press, 2012.
(Cited on page.)
Bruns, B. R., “Names for Games: Locating 2×2 Games,” Games, 2015, pp. 495–520.
https://www.mdpi.com/2073-4336/6/4/495 (Cited on page.)
Burger, E., Introduction to the Theory of Games, Prentice-Hall, 1963. (Cited on page 38.)
Conway, J. H., On Numbers and Games, Academic, 1976. (Cited on page.)
Cornelius, M. and A. Parr, What's Your Game?, Cambridge University Press, 1991. (Cited on
page.)
Cowen, R. and R. Dickau, “Playing games with Mathematica,” Mathematics in Education and
Research, Vol. 7, 1998, pp. 5–10. (Cited on page.)
Davenport, W. C., “Jamaican fishing village,” Papers in Caribbean Anthropology, Vol. 59, Yale
University Press, 1960. (Cited on page.)
Dawkins, R., The Selfish Gene, Oxford University Press, 4th edition, 2014. (Cited on page 74.)
DeCanioa, S. J. and A. Fremstad, “Game theory and climate diplomacy,” Ecological Economics,
85, 2013, pp. 177–187. (Cited on page.)
Dickhaut, J. and T. Kaplan, “A program for finding Nash equilibria,” The Mathematica Journal,
Vol. 1, 1991, pp. 87–93. (Cited on page.)
Dresher, M., The Mathematics of Games of Strategy: Theory and Applications, Dover, 1981.
(Cited on page.)
Dudeney, H., The Canterbury Puzzles, Dover, 1958. (Cited on page.)
Eaves, B. C., “The linear complementarity problem,” Management Science, Vol. 17, 1971, pp.
612–634. (Cited on page.)
Edgeworth, F. Y., Mathematical Psychics, Kegan Paul, 1881. (Cited on page.)
Gale, D., “A curious Nim-type game,” American Mathematical Monthly, Vol. 81, 1974, pp.
876–879.
Gambit: Software Tools for Game Theory – Game representation formats http://www.gambit-
project.org/gambit13/formats.html (accessed 1 Dec 2023). (Cited on page.)
Gardner, M., Sixth Book of Mathematical Games from Scientific American, W. H. Freeman,
1971. (Cited on page.)
Gillies, D. B., “Solutions to general non-zero-sum games,” in Contributions to the Theory of
Games, Vol. 4, Annals of Mathematics Studies No. 40, Princeton University Press, 1959, pp.
47–85. (Cited on page.)
GLPK (GNU Linear Programming Kit), https://www.gnu.org/software/glpk/ (accessed 19 Oct
2023). (Cited on page.)
Grundy, P. M., “Mathematics and games,” Eureka, Vol. 2, 1939, pp. 6–8. (Cited on page.)
Halmos, P. R., “The legend of John von Neumann,” American Mathematical Monthly, Vol. 80,
1973, pp. 382–394. (Cited on page.)
Jiang, A. X. and K. Leyton-Brown, “Bayesian Action-Graph Games,” NIPS (Neural Information
Processing Systems), 2010, https://www.cs.ubc.ca/jiang/papers/BAGG.pdf (accessed 23 Oct
2023). (Cited on page.)
Jiang, A. X. et al., “Action-Graph Games,” Games and Economic Behavior, January 2011,
Volume 71, Issue 1, pp. 141–173. https://www.cs.ubc.ca/jiang/papers/AGG.pdf (accessed 23
Oct 2023). (Cited on page.)
Jones, A. J., Game Theory: Mathematical Methods of Conflict, Ellis Horwood, 1980. (Cited on
page.)
Julia homepage, https://julialang.org (accessed 1 Jan 2023). (Cited on page.)
Kakutani, S., “A generalization of Brouwer's fixed point theorem,” Duke Mathematical Journal,
Vol. 8 (3), 1941, pp. 457–459. (Cited on page.)
Knight, V., J. Campbell, “Nashpy: A Python library for the computation of Nash equilibria,”
Journal of Open Source Software, 2018, https://www.theoj.org/joss-
papers/joss.00904/10.21105.joss.00904.pdf (accessed 9 Feb 2021). (Cited on page.)
Kuhn, H. W. (Ed.), Classics in Game Theory, Princeton University Press, 1997. (Cited on page.)
Kuhn, H. W., “A Simplified Two-Person Poker,” in Contributions to the Theory of Games, (AM-
24), Volume I, H. W. Kuhn and A. W. Tucker (Eds.), pp. 97–104, Princeton: Princeton, (1951).
(Cited on page.)
Kwon, C., Julia Programming for Operations Research, Chapter 2: Simple Linear Optimization,
(2020), https://www.softcover.io/read/7b8eb7d0/juliabook2/linear (accessed 19 Oct 2023).
(Cited on page.)
Lanctot, M. et al., OpenSpiel: A Framework for Reinforcement Learning in Games,
https://arxiv.org/pdf/1908.09453.pdf (accessed 21 Oct 2023). (Cited on page.)
Lemke, C. E. and J. T. Howson, Jr., “Equilibrium points in bimatrix games,” SIAM Journal of
Applied Mathematics, Vol. 12, 1964, pp. 413–423. (Cited on page.)
scipy.optimize.linprog,
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html (accessed 19
Oct 2023). (Cited on page.)
lp_solve reference guide menu, http://lpsolve.sourceforge.net/5.5/ (accessed 27 Nov 2020).
(Cited on page.)
Loomis, L. H., “On a theorem of von Neumann,” Proceedings of the National Academy of
Sciences, USA, Vol. 32, 1946, pp. 213–215. (Cited on page.)
Lucas, W. F., “A game with no solution,” Bulletin of the American Mathematical Society, Vol. 74,
1968, pp. 237–239. (Cited on page.)
Lucas, W., “Measuring power in weighted voting systems,” in Political and Related Models, S.
Brams, W. Lucas, and P. Straffin (Eds.), Springer, 1983, Chapter 9. (Cited on page.)
Lucas, W. F. and M. Rabie, “Games with no solutions and empty core,” Mathematics of
Operations Research, Vol. 7, 1982, pp. 491–500. (Cited on page.)
Luce, R. D. and H. Raiffa, Games and Decisions, John Wiley, 1957. (Cited on page.)
Mangasarian, O. L., “Equilibrium points in bimatrix games,” Journal of the Society for Industrial
and Applied Mathematics Vol. 12, 1964, pp. 778–780. (Cited on page.)
Wolfram Mathematica, https://www.wolfram.com/mathematica (accessed 1 Jan 2021). (Cited on
page.)
McKelvey, R. D. and A. McLennan, “Computation of equilibria in finite games,” Handbook of
Computational Economics, Vol. 1, 1996, pp. 87–142. (Cited on page.)
McKinsey, J. C. C., Introduction to the Theory of Games, McGraw-Hill, 1952. (Dover Reprint,
2003.) (Cited on page.)
Metzner, A. and D. Zwillinger, Kuhn Poker with Cheating and Its Detection,
https://arxiv.org/pdf/2011.04450.pdf, 9 Nov 2020. (Cited on page.)
Milnor, J., “A Nobel Prize for John Nash,” The Mathematical Intelligencer, Vol. 17, 1995, pp.
14–15. (Cited on page.)
Mitchell, S. et al., PuLP: A Linear Programming Toolkit for Python, 5 September 2011,
https://optimization-online.org/wp-content/uploads/2011/09/3178.pdf (accessed 16 March 2023).
(Cited on page.)
Moore, E. H., “A generalization of the game called Nim,” Annals of Mathematics, Vol. 11, 1910,
pp. 93–94. (Cited on page.)
Morris, P., Introduction to Game Theory, Springer, 1994. (Cited on page.)
Nachbar, J. H., “Evolution in the finitely repeated Prisoner's dilemma,” Journal of Economic
Behavior and Organization, Vol. 19, 1992, pp. 307–326. (Cited on page.)
Nasar, S., A Beautiful Mind, Simon & Schuster, 1998. (Cited on page.)
Nash, J. F. Jr., “Equilibrium points in n-person games,” Proceedings of the National Academy of
Sciences, Vol. 36, 1950, pp. 48–49. (Reprinted in Kuhn [46].) (Cited on page.)
Nash, J. F. Jr., “The bargaining problem,” Econometrica, Vol. 18, 1950, pp. 155–162. (Reprinted
in Kuhn [46].) (Cited on page.)
Nash, J. F. Jr., “Non-cooperative games,” Annals of Mathematics, Vol. 54, 1951, pp. 286–295.
(Reprinted in Kuhn [46].) (Cited on page.)
Nash, J. F. Jr., “Two-person cooperative games,” Econometrica, Vol. 21, 1953, pp. 128–140.
(Cited on page.)
Welcome to Nashpy's documentation!, https://nashpy.readthedocs.io/en/latest/ (accessed 9 Feb
2021). (Cited on page.)
Octave, GNU Octave, https://www.gnu.org/software/octave/ (accessed 27 Nov 2020). (Cited on
page.)
Octave, Linear Programming, https://octave.org/doc/v4.4.1/Linear-Programming.html,
(accessed 27 Nov 2020). (Cited on page.)
Owen, G., Game Theory, W. B. Saunders, 1968; (2nd ed.) Academic Press, 1982. (Cited on
page.)
Pierce, J. R., Symbols, Signals, and Noise, Harper, 1961. (Cited on page.)
Powers, M. R., M. Shubik, and W. Wang, “Expected Worth for 2×2 Matrix Games with Variable
Grid Sizes,” Cowles Foundation Discussion Paper No. 2039R, Cowles Foundation for Research
in Economics, Yale University, October 2016,
https://cowles.yale.edu/sites/default/files/files/pub/d20/d2039-r.pdf (accessed 31 Jan 2021).
(Cited on page.)
Python, https://www.python.org/, (accessed 27 Nov 2020). (Cited on page.)
PuLP, https://pypi.org/project/PuLP/ (accessed 27 Nov 2020). (Cited on page.)
R, The R Project for Statistical Computing, https://www.r-project.org/ (accessed 27 Nov 2020).
(Cited on page.)
Rapoport, A., Two-Person Game Theory: The Essential Ideas, University of Michigan Press,
1966. (Cited on page.)
Rosenmüller, J., “On a generalization of the Lemke–Howson algorithm to non-cooperative N-
person games,” SIAM Journal of Applied Mathematics, Vol. 21, 1971, pp. 73–79. (Cited on
page.)
Sallan, J. M., O. Lordan, and V. Fernandez, Modeling and Solving Linear Programming with R,
https://upcommons.upc.edu/bitstream/handle/2117/78335/Modeling+and+Solving+Linear+Progr
amming+with+R.pdf?sequence=1 (accessed 27 Nov 2020). (Cited on page.)
Savani, Rahul and Theodore L. Turocy, Gambit: The package for computation in game theory,
Version 16.1.0, 2023, http://www.gambit-project.org (accessed 19 Oct 2023). (Cited on page.)
Savani, R. and B. von Stengel “Game Theory Explorer: Software for the Applied Game
Theorist,” Computational Management Science, Vol. 12, 2015, pp. 5–33,
http://www.maths.lse.ac.uk/Personal/stengel/TEXTE/largeongte.pdf (accessed 1 Jan 2023).
(Cited on page.)
Schaeffer, J. et al., “Checkers Is Solved,” Science, 14 Sep 2007, Vol 317, Issue 5844, pp.
1518–1522. (Cited on page.)
Schuh, F., The Master Book of Mathematical Recreations, Dover, 1968. (Cited on page.)
Shapley, L. S., “A value for n-person games,” Contributions to the Theory of Games II,
Princeton University Press, 1953, pp. 307–317 (Reprinted in Kuhn [46].) (Cited on page.)
Shapley, L. S. and M. Shubik, “A method for evaluating the distribution of power in a committee
system,” The American Political Science Review, Vol. 48, 1954, pp. 787–792. (Cited on page.)
Silverman, D. L., Your Move, Kaye and Ward, 1971. (Cited on page.)
Sprague, R. P., “Ueber mathematische Kampfspiele,” Tohoku Mathematical Journal, Vol. 41,
1935–36, pp. 438–444. (Cited on page.)
Straffin, P., Topics in the Theory of Voting, Birkhauser, 1980. (Cited on page.)
Straffin, P., “Power indices in politics,” in Political and Related Models, S. Brams, W. Lucas, and
P. Straffin (Eds.), Springer, 1983, Chapter 11. (Cited on page.)
Straffin, P. D., Game Theory and Strategy, Mathematical Association of America, 1993. (Cited
on page.)
Sultan, A., “How to simply show that there are 78 ‘strict ordinal’ 2 × 2 game matrices,”
https://math.stackexchange.com/questions/622892/how-to-simply-show-that-there-are-78-strict-
ordinal-2x2-game-matrices (accessed 30 Jan 2021). (Cited on page.)
Szafron, D. et al., “A Parameterized Family of Equilibrium Profiles for Three-Player Kuhn
Poker,” May 2013, In Ito et al. (eds.). Proceedings of the 12th International Conference on
Autonomous Agents and Multiagent Systems (AAMAS 2013), Saint Paul, Minnesota. (Cited on
page.)
Tassa, Y., LCP (contributed Matlab code), https://jp.mathworks.com/matlabcentral/mlc-
downloads/downloads/submissions/36911/versions/3/previews/LCP.m/index.html, (accessed 8
Feb 2021). (Cited on page.)
Vajda, S., Mathematical Games and How to Play Them, Ellis Horwood, 1992. (Cited on page.)
Ville, J., “Sur la théorie générale des jeux où intervient l'habilité des joueurs,” in Applications des
Jeux de Hasard, Vol. 4, E. Borel et al. (1925–1939), 1938, Fasc. 2, pp. 105–113. (Cited on
page.)
von Neumann, J., “Zur Theorie der Gesellschaftsspiele,” Math. Annalen, Vol. 100, 1928, pp.
295–320. (English translation by S. Bargmann, “On the theory of games of strategy,” in
Contributions to the Theory of Games, Vol. 4, A. W. Tucker and R. D. Luce (Eds.), Annals of
Mathematics Studies, Vol. 40, Princeton University Press, 1959.) (Cited on page.)
von Neumann, J. and O. Morgenstern, Theory of Games and Economic Behavior, Princeton
University Press, 1944. (60th Anniversary Edition, 2004.) (Cited on page.)
Weibull, J. W., Evolutionary Game Theory, The MIT Press, 1995. (Cited on page.)
Wikimedia Commons contributors, “File:Rock Paper Scissors Lizard Spock.JPG,” Wikimedia
Commons, the free media repository,
https://commons.wikimedia.org/w/index.php?title=File:Rock_Paper_Scissors_Lizard_Spock.JP
G (accessed 10 Jan 2021). (Cited on page.)
Wikimedia Commons contributors, “File:Rock paper scissors lizard spock.svg,” Wikimedia
Commons, the free media repository,
https://commons.wikimedia.org/w/index.php?title=File:Rock_paper_scissors_lizard_spock.svg
(accessed 10 Jan 2021). (Cited on page.)
Wikimedia Commons contributors, “File:Pierre ciseaux feuille lizard spock aligned.svg,” Wikimedia
Commons, the free media repository, https://commons.wikimedia.org/wiki/File:Pierre_ciseaux_feuille_l
(accessed 25 Jan 2021). (Cited on page.)
Wikipedia contributors, “Game complexity,” Wikipedia, The Free Encyclopedia,
https://en.wikipedia.org/w/index.php?title=Game_complexity&oldid=998680851 (accessed 8 Jan
2021).
Wikipedia contributors, “Last Year at Marienbad,” Wikipedia, The Free Encyclopedia,
https://en.wikipedia.org/w/index.php?title=Last_Year_at_Marienbad&oldid=986153351
(accessed 30 Oct 2020). (Cited on page.)
Wikipedia contributors, “The Big Bang Theory,” Wikipedia, The Free Encyclopedia,
https://en.wikipedia.org/w/index.php?title=The_Big_Bang_Theory&oldid=999119883 (accessed
10 Jan 2021). (Cited on page.)
Wilson, R., “Computing equilibria of n-person games,” SIAM Journal of Applied Mathematics,
Vol. 21, 1971, pp. 80–87. (Cited on page.)
Wythoff, W. A., “A modification of the game of Nim,” Nieuw Archief voor Wiskunde, Vol. 7, 1907,
pp. 199–202. (Cited on page.)
Yaglom, A. M. and I. M. Yaglom, Challenging Mathematical Problems with Elementary
Solutions, Dover, 1987. (Original Russian edition, Moscow, 1954) (Cited on page.)
Young, S., The Search for GTO: Determining Optimal Poker Strategy Using Linear
Programming, Independent Study Thesis, The College of Wooster, 2017,
https://openworks.wooster.edu/cgi/viewcontent.cgi?article=8559&context=independentstudy.
(Cited on page.)
Zermelo, E., “Ueber eine Anwendung der Mengenlehre auf die Theorie des Schachspiels,”
Proceedings of the Fifth International Congress of Mathematicians, Vol. 2, Cambridge, 1912,
pp. 501–504. (Cited on page.)
Zwicker, W. S., “Playing games with games: the Hypergame paradox,” American Mathematical
Monthly, 1987, pp. 507–514. (Cited on page.)