
Unit V

Iterated elimination of strictly dominated strategies

• Iterated elimination of strictly dominated strategies (IESDS) is a fundamental concept in Algorithmic Game Theory (AGT) and classical game theory.

• It is a method for simplifying games by successively removing strategies that are strictly dominated, i.e., strategies that are always worse than some other strategy, no matter what the opponents do.
What is a Strictly Dominated Strategy?
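A standard formal statement of the idea (the notation below is a common convention, not taken from this slide):

```latex
% Strict dominance: s_i' strictly dominates s_i if it is strictly better
% for player i against every possible combination of opponents' strategies.
s_i \text{ is strictly dominated by } s_i'
\iff
u_i(s_i', s_{-i}) > u_i(s_i, s_{-i}) \quad \text{for all } s_{-i}.
```

A rational player never plays a strictly dominated strategy, which is what justifies removing it from the game.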
Iterated Elimination Procedure
1. Start with the full game and consider all strategies.

2. Identify strictly dominated strategies for any player and eliminate them.

3. Update the game by removing those strategies.

4. Repeat the process until no more strictly dominated strategies can be found.

5. This results in a reduced game, which is often easier to analyze.
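As a small illustration of the procedure above, here is a minimal Python sketch. It only checks dominance by pure strategies (dominance by mixed strategies, mentioned later in the example, is omitted), and the function and variable names are illustrative, not from the slides.

```python
# Iterated elimination of strictly dominated pure strategies in a bimatrix game.
# A[i][j] is the row player's payoff, B[i][j] the column player's payoff.
import itertools

def strictly_dominated(strategies, opp_strategies, payoff_of):
    """Strategies strictly dominated by some other remaining pure strategy."""
    dominated = set()
    for s, t in itertools.permutations(strategies, 2):
        # t strictly dominates s if t does strictly better against every opponent strategy
        if all(payoff_of(t, o) > payoff_of(s, o) for o in opp_strategies):
            dominated.add(s)
    return dominated

def iesds(A, B):
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        bad_rows = strictly_dominated(rows, cols, lambda i, j: A[i][j])
        bad_cols = strictly_dominated(cols, rows, lambda j, i: B[i][j])
        if bad_rows or bad_cols:
            rows = [r for r in rows if r not in bad_rows]
            cols = [c for c in cols if c not in bad_cols]
            changed = True
    return rows, cols  # indices of the surviving strategies

# Usage: iesds([[2, 3], [1, 2]], [[1, 0], [0, 2]]) returns ([0], [0])
```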


Example

Consider strategy M for Player 1: compare it to T, to B, and to mixtures of T and B. At this stage, no alternative is strictly better than M, so M cannot be eliminated yet.

Hold off on Player 1; let's check Player 2 first.


Contd..

Eliminate strategy R for Player 2.


Contd..

So, eliminate M for Player 1 (once R is gone, M becomes strictly dominated).


Contd..
Interpretation:

Player 2 will definitely play M.

Player 1 now faces a simplified choice: T or B.

T yields 2, B yields 0 (against M).

So, Player 1 chooses T.

Predicted outcome by IESDS: (T, M) → (2, 2)

Only T and B for Player 1, and M for Player 2 remain.


Lemke–Howson Algorithm

• The Lemke–Howson Algorithm is a classic algorithm used in game theory to find a Nash equilibrium in two-player (bimatrix) games.

• It is one of the most well-known constructive methods for finding equilibria in finite strategic-form games.
Goal of Lemke–Howson

• Given a two-player game with payoff matrices A and B, the algorithm finds a Nash equilibrium: a pair of mixed strategies (probability distributions over pure strategies) where no player has an incentive to unilaterally deviate.

Basic Idea
• The Lemke–Howson algorithm is a pivoting method, similar in spirit to the Simplex algorithm in linear programming.

• It traverses edges of best-response polytopes until it lands on a completely labeled pair of strategies, which corresponds to a Nash equilibrium.
High-Level Steps
1. Convert the game to a Linear Complementarity Problem (LCP).

2. Label each pure strategy of both players with a set of labels representing best responses and zero probabilities.

3. Drop one label (this is your "missing label") to start the algorithm.

4. Perform complementary pivoting: walk along edges of the best-response polytopes, switching labels one at a time.

5. Stop when you reach a completely labeled vertex — that is, each label (except the missing one) appears once.
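As a usage sketch only: the third-party nashpy library provides an implementation of this pivoting procedure. The payoff matrices below are illustrative, and installing the library (pip install nashpy) together with the exact call shown are assumptions about that library's API, not something stated on these slides.

```python
import numpy as np
import nashpy as nash  # third-party library, assumed installed

A = np.array([[3, 1],
              [0, 2]])   # row player's payoffs
B = np.array([[2, 1],
              [0, 3]])   # column player's payoffs

game = nash.Game(A, B)
# Different choices of the initially dropped label can reach different equilibria.
row_strategy, col_strategy = game.lemke_howson(initial_dropped_label=0)
print(row_strategy, col_strategy)
```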
Nash Equilibrium

• In Algorithmic Game Theory (AGT), a Nash Equilibrium is a fundamental concept borrowed from classical game theory, but applied to games where the players are often agents (which may be algorithms) and where there can be computational constraints or strategic interactions over networks, auctions, or other algorithmic environments.
Formal Definition
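A standard way to state the definition (the notation is a common convention, not taken from this slide):

```latex
% A strategy profile \sigma^* = (\sigma_1^*, \dots, \sigma_n^*) is a Nash equilibrium
% if no player can gain by deviating unilaterally:
u_i(\sigma_i^*, \sigma_{-i}^*) \;\ge\; u_i(\sigma_i, \sigma_{-i}^*)
\quad \text{for every player } i \text{ and every alternative strategy } \sigma_i.
```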
Examples in AGT

1. Routing Games (e.g., selfish routing in networks):
Each user picks a path to minimize their own delay. A Nash Equilibrium is a state where no user can switch to a different path and reduce their travel time.

2. Auctions (e.g., Vickrey or GSP auctions):
Players bid for items. Equilibria represent stable bidding strategies.

3. Congestion Games: where the cost of a resource increases with more users. These games are known to always have at least one pure strategy Nash equilibrium.
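To make the "no profitable unilateral deviation" condition concrete, here is a small Python check for a pure-strategy profile in a two-player game; the payoff matrices and names are illustrative, not from the slides.

```python
# Is the pure profile (r, c) a Nash equilibrium of the bimatrix game (A, B)?
# A[i][j] is the row player's payoff, B[i][j] the column player's.
def is_pure_nash(A, B, r, c):
    row_ok = all(A[r][c] >= A[i][c] for i in range(len(A)))     # no better row deviation
    col_ok = all(B[r][c] >= B[r][j] for j in range(len(B[0])))  # no better column deviation
    return row_ok and col_ok

# Illustrative Prisoner's Dilemma: mutual defection (1, 1) is the unique pure equilibrium.
A = [[3, 0],
     [5, 1]]
B = [[3, 5],
     [0, 1]]
assert is_pure_nash(A, B, 1, 1)
assert not is_pure_nash(A, B, 0, 0)
```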
Fictitious Play

Fictitious Play is a learning process in which

• Players repeatedly play a game.


• Each player assumes opponents are playing stationary strategies (i.e., not
changing).
• At each round, a player best-responds to the empirical frequency of the
opponent's past actions.

It models how boundedly rational players might converge to a Nash Equilibrium over time.
Fictitious Play Algorithm

1. Initialize: Each player starts with some belief about the other player's strategy.

2. Repeat (at each time step):
   1. Compute the empirical distribution of the opponent's past actions.
   2. Choose a best response to this empirical distribution.

3. Update: Record the player's action.

4. Repeat until convergence (or until a fixed number of rounds).
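A minimal Python sketch of this loop for a two-player game, assuming numpy payoff matrices; the names, the starting beliefs, and the convergence comment are illustrative assumptions, not from the slides.

```python
import numpy as np

def fictitious_play(A, B, rounds=1000):
    """Fictitious play in a bimatrix game: A, B are the row/column players' payoffs."""
    n, m = A.shape
    row_counts = np.zeros(n)   # how often each row action has been played
    col_counts = np.zeros(m)   # how often each column action has been played
    row_counts[0] += 1         # arbitrary initial beliefs
    col_counts[0] += 1
    for _ in range(rounds):
        # Each player best-responds to the empirical frequency of the opponent's past actions.
        col_freq = col_counts / col_counts.sum()
        row_freq = row_counts / row_counts.sum()
        r = int(np.argmax(A @ col_freq))   # row player's best response
        c = int(np.argmax(row_freq @ B))   # column player's best response
        row_counts[r] += 1
        col_counts[c] += 1
    # The empirical frequencies approximate equilibrium play in games where
    # fictitious play is known to converge (e.g. zero-sum games).
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Example: matching pennies; both frequency vectors approach (1/2, 1/2).
A = np.array([[1, -1], [-1, 1]])
print(fictitious_play(A, -A))
```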


Brown–Von Neumann–Nash (BNN) Dynamics

• Brown–Von Neumann–Nash (BNN) Dynamics is a dynamic learning model used in Algorithmic Game Theory (AGT) to describe how players adjust their strategies over time in a continuous and smooth manner based on payoff differences.

• It models adaptive behavior in games without assuming perfect rationality or full knowledge.
What Is Brown–Von Neumann–Nash (BNN) Dynamics?

• BNN Dynamics describes how players gradually shift probability mass toward better-performing strategies based on payoff differences between strategies.

• It models how strategies evolve in population games or multi-agent systems, guided by marginal advantage rather than pure best responses.
Formal Definition
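The usual continuous-time form of the dynamic, for a population state x and payoff function F (this is the standard textbook formulation; the notation is assumed, not taken from the slide):

```latex
% Excess payoff of strategy i over the population-average payoff:
\hat{F}_i(x) \;=\; \max\Big\{\, F_i(x) - \textstyle\sum_j x_j F_j(x),\; 0 \,\Big\}
% BNN dynamics: above-average strategies gain probability mass,
% and the second term keeps x on the probability simplex:
\qquad
\dot{x}_i \;=\; \hat{F}_i(x) - x_i \sum_j \hat{F}_j(x)
```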
Contd..

• Strategies with higher-than-average payoffs grow in popularity.

• Strategies with below-average payoffs do not gain weight (or may shrink indirectly).

• Unlike Fictitious Play, BNN does not require memory of past play —
just current payoffs.
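A small numerical sketch of this behavior, using a simple Euler discretization of the dynamic above for a single-population matrix game; the payoff matrix, step size, and starting point are illustrative assumptions.

```python
import numpy as np

def bnn_step(x, A, dt=0.01):
    """One Euler step of BNN dynamics for population state x and payoff matrix A."""
    payoffs = A @ x                          # F_i(x): payoff of each pure strategy
    avg = x @ payoffs                        # population-average payoff
    excess = np.maximum(payoffs - avg, 0.0)  # only above-average strategies push growth
    return x + dt * (excess - x * excess.sum())

# Illustrative coordination-style game: the first strategy earns the highest payoff.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
x = np.array([0.4, 0.3, 0.3])
for _ in range(2000):
    x = bnn_step(x, A)
print(x.round(3))   # probability mass has shifted toward the first strategy
```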
What is the Nash Bargaining Problem in Algorithmic Game Theory (AGT)?

The Nash Bargaining Problem models how two (or more) players can
cooperatively negotiate to split a resource (e.g., money, tasks, benefits) fairly,
assuming:

• They both benefit from cooperation rather than fighting.

• If no agreement is reached, each player gets a disagreement payoff (sometimes called the threat point).

The goal is to find a fair and efficient agreement that both players prefer over
fighting (or quitting).
Formal Definition
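The standard two-player formulation, with feasible payoff set S and disagreement point d = (d1, d2) (notation assumed, not from this slide):

```latex
% Nash bargaining solution: among feasible agreements that improve on the
% disagreement point, pick the one maximizing the product of the gains.
(u_1^*, u_2^*) \;=\;
\arg\max_{\substack{(u_1, u_2)\,\in\, S \\ u_1 \ge d_1,\; u_2 \ge d_2}}
\;(u_1 - d_1)\,(u_2 - d_2)
```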
Nash Bargaining Solution

The Nash bargaining solution is the point that satisfies:

• Pareto Optimality (no one can be made better off without making
the other worse off),
• Symmetry (if players are symmetric, their payoffs should be equal),
• Independence of Irrelevant Alternatives (removing feasible options that would not have been chosen does not change the solution),
• Scale Invariance (changing units doesn’t affect the solution).
Example: Money Splitting
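A small worked instance (the numbers are illustrative, not from the slides): two players split $100; if talks fail, player 1 has an outside option worth $20 and player 2 gets nothing, so d = (20, 0).

```latex
% Maximize the Nash product over splits (x, 100 - x):
\max_x \;(x - 20)(100 - x)
\quad\Rightarrow\quad
\frac{d}{dx}\big[(x - 20)(100 - x)\big] = 120 - 2x = 0
\quad\Rightarrow\quad x = 60
```

So player 1 receives 60 and player 2 receives 40: each side gets its disagreement payoff plus half of the 80 units of surplus created by reaching an agreement.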
Salary Negotiation
• Context: A company and a candidate negotiate a job offer.

• Bargaining:
• The company wants to hire the candidate for as little salary as possible.
• The candidate wants the highest salary and best benefits.

• Fallback ("disagreement point"):


• If they don't agree, the candidate stays unemployed or takes another offer, and the
company must find another candidate (at some cost).

• Nash bargaining predicts the final salary will depend on how valuable each party is to
the other, and what their outside options are.
Why Transferable Utility Games Matter in AGT

• Many real-world systems (like online platforms, distributed computing, and markets) involve sharing profits/costs.

• TU models make it easier to design fair and efficient algorithms (since you just
care about dividing value, not more complicated preferences).

• It helps in mechanism design, cooperative AI, and fair resource allocation.


Divorce Settlement

• Context: A divorcing couple needs to divide property, custody, and finances.

• Bargaining:
• Both want the best possible outcome for themselves.

• Fallback:
• If they can't agree, a court will decide — possibly worse for both (expensive and
unpredictable).

• So they are incentivized to negotiate a fair (Pareto optimal) split based on mutual
benefit.
Trade Deals Between Countries

• Context: Two countries negotiate a trade agreement (like tariff reductions).

• Bargaining:
• Each country wants better access to the other’s markets while protecting its own industries.

• Fallback:
• Without a deal, both countries revert to higher tariffs, harming both
economies.

• Again, Nash bargaining suggests the deal reached should balance the mutual gains
they could get compared to no deal.
Mergers and Acquisitions

• Context: Two companies negotiate the terms of a merger or acquisition.

• Bargaining:
• The buyer wants a lower price; the seller wants a higher price.

• Fallback:
• If they don't agree, both miss out on potential synergies and profit.

Negotiations reflect their respective bargaining powers and fallback options (other
buyers/sellers).
Resource Sharing Among Tech Giants

• Context: Suppose Amazon and Apple negotiate cloud services sharing or collaborative R&D.

• Bargaining:
• Both benefit from cooperation but want to maximize their own share of gains.

• Fallback:
• Without agreement, they have to build costly alternatives themselves.
Conclusion
• Each party has something to gain from cooperation.

• Each has a fallback if negotiation fails.

• The Nash bargaining solution gives a way to predict a "fair" agreement based on
how much each would lose without the deal.
Transferable utility game

Definition
In a transferable utility game, players can transfer utility (value) among
themselves freely — usually through payments (like money).

• The total value created can be split among players in any way.

• Only the total payoff matters, not who did what individually.

Think: "We can pool the total reward and divide it however we want."