
Introduction to Game Theory

Introduction

Game theory is a method of studying strategic situations, in which the actions of others affect an individual’s outcomes. Strategic situations contrast with non-strategic ones, such as perfect competition or monopoly, where a player’s best action does not depend on what others do.

Definitions

Strategic Situation

Non-Strategic Situation

Game Theory Applications

Game theory is applied across various fields including:

Game 1: Grade Scheme

Each student must choose either:

The outcomes are defined as follows:

Rows are the row player’s choice, columns the column player’s, with the row player’s grade listed first:
$$\begin{array}{c|c|c} & \text{Alpha} & \text{Beta} \\ \hline \text{Alpha} & (\text{B-}, \text{B-}) & (\text{A}, \text{C}) \\ \hline \text{Beta} & (\text{C}, \text{A}) & (\text{B+}, \text{B+}) \\ \end{array}$$

Where:

Analysis and Strategies

1. Dominated Strategy: A strategy is dominated if some other strategy yields a better payoff regardless of the opponent’s actions.

Definition: A strategy α strictly dominates β if:
U(α) > U(β)  for all opponents’ strategies

2. Key Lesson:

Prisoner’s Dilemma

In the classic scenario:

Example Outcomes:

Lesson Summary from Game 1

Game 2: Pick a Number

In this game, students must choose a number between 1 and 100. The winner will be the person whose number is closest to two-thirds of the average chosen number.

Example Calculation

If numbers chosen: 25, 5, and 60:
$$\text{Total} = 25 + 5 + 60 = 90 \\ \text{Average} = \frac{90}{3} = 30 \\ \text{Two-Thirds of Average} = \frac{2}{3} \times 30 = 20$$
The winner with the closest number to 20 will receive the prize.
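The winner computation can be made concrete with a short script; breaking ties in favor of the earliest player is an assumption, since the notes do not specify a tie-breaking rule:

```python
def two_thirds_winner(choices):
    """Index of the player whose choice is closest to 2/3 of the average.

    Ties go to the earliest player, an assumption not stated in the notes.
    """
    target = (2 / 3) * (sum(choices) / len(choices))
    return min(range(len(choices)), key=lambda i: abs(choices[i] - target))

# The worked example: choices 25, 5, 60 give a target of 20,
# so the first player (who chose 25) wins.
print(two_thirds_winner([25, 5, 60]))  # 0
```

Iterating this reasoning, with every player expecting the others to reason the same way, drives the chosen numbers downward toward 1, the iterated-dominance prediction.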

Key Takeaways

Game Theory Lecture Notes

Introduction to Game Theory

In this lecture, we explore the foundational concepts of Game Theory, including normal-form games, strategies, outcomes, and payoffs. We emphasize the importance of understanding not only one’s own payoffs but also the payoffs of others.

Game Structure

Strictly Dominated Strategies

A strategy si is strictly dominated by another strategy si′ if:
Ui(si′, s − i) > Ui(si, s − i) for all s − i.

Weakly Dominated Strategies

A strategy si is weakly dominated by si′ if:
Ui(si′, s − i) ≥ Ui(si, s − i) for all s − i  and  Ui(si′, s − i) > Ui(si, s − i) for at least one s − i.
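Both definitions translate directly into code. A minimal sketch, using the familiar Prisoner’s Dilemma row payoffs (cooperate = row 0, defect = row 1) as the test case:

```python
def strictly_dominates(u, a, b):
    """True if row strategy a strictly dominates row b.

    u[s][t] is the row player's payoff for playing s against column t.
    """
    return all(ua > ub for ua, ub in zip(u[a], u[b]))

def weakly_dominates(u, a, b):
    """Weak dominance: never worse, and strictly better at least once."""
    return (all(ua >= ub for ua, ub in zip(u[a], u[b]))
            and any(ua > ub for ua, ub in zip(u[a], u[b])))

# Prisoner's Dilemma row payoffs: defect strictly dominates cooperate.
u = [[2, 0],   # cooperate vs (C, D)
     [3, 1]]   # defect    vs (C, D)
print(strictly_dominates(u, 1, 0))  # True
```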

Prisoners’ Dilemma: Real-World Applications

Example: Normal-Form Game

Consider a simple game with two players I and II:

The payoff matrix is as follows, with player I’s payoff listed first and player II’s listed second:
$$\begin{array}{c|c|c|c} & L & C & R \\ \hline T & (5,-1) & (11,3) & (0,0) \\ \hline B & (6,4) & (0,2) & (2,0) \\ \end{array}$$

Analysis of the Game

Rationality and Common Knowledge

Conclusion and Real-World Implications

The lecture summarized the foundational elements of Game Theory, illustrated through various examples, including the Prisoners’ Dilemma and specific strategies in normal-form games. Understanding payoffs, recognizing dominated strategies, rationality, and the essence of common knowledge are crucial for predicting behavior in strategic settings.

Notes on Game Theory: Iterative Deletion of Dominated Strategies

Introduction

In this lecture, we discussed the concept of Iterative Deletion of Dominated Strategies. The main idea involves analyzing strategies in a game to identify which ones are dominated, deleting those strategies, and re-evaluating the game iteratively until no further strategies can be eliminated. The process is summarized by the following steps:

  1. Identify dominated strategies.

  2. Delete those strategies.

  3. Repeat the process on the modified game.
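The three steps above can be sketched as a small routine; u1 and u2 are the row and column players’ payoff matrices, and the Prisoner’s Dilemma serves as a check (only defection survives):

```python
def iterated_deletion(u1, u2):
    """Iteratively delete strictly dominated pure strategies.

    u1[i][j] and u2[i][j] are the row and column players' payoffs.
    Returns the indices of the surviving rows and columns.
    """
    rows = list(range(len(u1)))
    cols = list(range(len(u1[0])))
    changed = True
    while changed:
        changed = False
        for s in rows[:]:  # delete a row if some other row beats it everywhere
            if any(all(u1[t][c] > u1[s][c] for c in cols) for t in rows if t != s):
                rows.remove(s)
                changed = True
        for s in cols[:]:  # likewise for columns, via the column player's payoffs
            if any(all(u2[r][t] > u2[r][s] for r in rows) for t in cols if t != s):
                cols.remove(s)
                changed = True
    return rows, cols

# Prisoner's Dilemma check: only Defect (index 1) survives for both players.
print(iterated_deletion([[2, 0], [3, 1]], [[2, 3], [0, 1]]))  # ([1], [1])
```

For strict dominance, the order in which strategies are deleted does not affect the surviving set.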

Key Concepts

Dominated Strategies

A strategy is considered dominated if there is another strategy that yields a better payoff regardless of what the opposing player does.

Application in Political Elections

The discussion transitioned to a practical application of the iterative deletion of dominated strategies using a political model where two candidates position themselves on a political spectrum ranging from left (1) to right (10):

Median Voter Theorem

The culmination of the discussion on political positioning leads to the Median Voter Theorem, which states that in a majoritarian election, candidates will move towards the median voter’s position. This results in both candidates converging towards the center of the political spectrum.

Prediction and Historical Examples

Historical elections, such as those between Kennedy and Nixon (1960) and Clinton’s election strategy (1992), exemplify this theorem where candidates adopted moderate positions to appeal to a broader electorate.

Best Response Dynamics

We also touched upon the concept of Best Response where:

The example provided discussed a game with two players where each chooses among strategies and the payoffs were calculated to determine optimal strategies based on expected outcomes.

Graphical Representation

Instead of tedious calculations, graphical representations of payoffs based on beliefs about opponents’ strategies were utilized:

This allows for visual identification of the best responses in various scenarios.

Conclusion

Modeling is a crucial tool in economics and political science to capture and simplify real-world complexities. The iterative deletion of dominated strategies and the Median Voter Theorem provide insightful frameworks for understanding strategic behavior in competitive settings, while best response strategies give a structured approach to decision-making based on expectations.

Lecture Notes on Game Theory: Best Response and Nash Equilibrium

Introduction

The Penalty Kick Game

Game Setup

Probabilities and Payoffs

Strategy Dominance

Expected Payoff Calculation

Expected payoff based on beliefs about goalie behavior, represented as a function of the probability p that the goalie dives to the right:
E(Left) = 4(1 − p) + 9p

E(Middle) = 6(1 − p) + 6p = 6

E(Right) = 9(1 − p) + 4p
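These three lines can be evaluated for any belief p; note that max(E(Left), E(Right)) is at least 6.5 for every p, so Middle is never a best response:

```python
def expected_payoffs(p):
    """Kicker's expected payoffs when the goalie dives right with probability p."""
    return {"Left": 4 * (1 - p) + 9 * p,
            "Middle": 6.0,
            "Right": 9 * (1 - p) + 4 * p}

def best_kick(p):
    """The pure strategy with the highest expected payoff given belief p."""
    ev = expected_payoffs(p)
    return max(ev, key=ev.get)

print(best_kick(0.2), best_kick(0.8))  # Right Left
```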

Conclusion of the Penalty Kick Game

The Partnership Game

Game Setup

Player Payoffs

Best Response Analysis

Finding Nash Equilibrium

Efficiency of Effort

Conclusion

Notes on Nash Equilibrium

Introduction to Nash Equilibrium

Nash Equilibrium (NE) is a fundamental concept in game theory that describes a situation in which no player can benefit by unilaterally changing their strategy given that the other players’ strategies remain unchanged.

Formal Definition

A strategy profile (S1*, S2*, …, SM*) is a Nash Equilibrium if:
for every i ∈ {1, 2, …, M},  Si* is a best response to S − i*
where S − i* represents the strategies chosen by all players except player i.

Key Characteristics

Motivation for Studying Nash Equilibrium

  1. Common Application: NE is widely used in various real-world applications and is featured in textbooks.

  2. Self-Fulfilling Beliefs: If all players believe others will play strategies from a Nash Equilibrium, then it is rational for them to play those strategies as well.

Examples and Best Response

Consider a simple game with two players, each having three strategies.

Example Payoff Matrix

$$\begin{array}{c|c|c|c} & \text{Left} & \text{Center} & \text{Right} \\ \hline \text{Up} & (0,4) & (4,0) & (5,3) \\ \hline \text{Middle} & (4,0) & (0,4) & (5,3) \\ \hline \text{Down} & (3,5) & (3,5) & (6,6) \\ \end{array}$$

Finding Best Responses

Nash Equilibrium

By identifying best responses, we find the Nash Equilibrium is:
Nash Equilibrium: (Down, Right) at (6, 6)
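The cell-by-cell best-response check can be automated; a minimal sketch applied to the example matrix above:

```python
def pure_nash_equilibria(u1, u2):
    """All pure-strategy Nash equilibria of a bimatrix game.

    u1[i][j] and u2[i][j] are the row and column players' payoffs
    when the row player picks i and the column player picks j.
    """
    n, m = len(u1), len(u1[0])
    return [(i, j)
            for i in range(n) for j in range(m)
            if all(u1[i][j] >= u1[k][j] for k in range(n))    # row best reply
            and all(u2[i][j] >= u2[i][l] for l in range(m))]  # column best reply

# The example: rows Up/Middle/Down, columns Left/Center/Right.
u1 = [[0, 4, 5], [4, 0, 5], [3, 3, 6]]
u2 = [[4, 0, 3], [0, 4, 3], [5, 5, 6]]
print(pure_nash_equilibria(u1, u2))  # [(2, 2)]
```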

Dominance vs. Nash Equilibrium

Dominated Strategies

No strictly dominated strategy can ever be played in a Nash Equilibrium, as it will not be a best response to any strategy.

Weakly Dominated Strategies

Unlike strictly dominated strategies, weakly dominated strategies can still be part of Nash Equilibria.

Coordination Games

Coordination problems arise when players must align their strategies for mutual benefit. Examples include:

The Investment Game

In this game:

Outcomes

Two Nash Equilibria:

The convergence towards no one investing illustrates coordination failure.

Conclusion

  1. Nash Equilibria are vital in understanding strategic interactions across economics and social sciences.

  2. Outcomes in coordination games illustrate challenges and potential failures in achieving optimal equilibria, despite rational calculations.

Lecture Notes on Coordination Games and Cournot Duopoly

Coordination Games

Introduction

In coordination games, communication can be crucial in achieving better Nash equilibria, unlike in the Prisoner’s Dilemma, where communication does not help.

Key Properties

Example: Simple Coordination Game

Consider the payoff matrix:
$$\begin{array}{c|c|c} & \text{Player 2: Left} & \text{Player 2: Right} \\ \hline \text{Player 1: Up} & (1,1) & (0,0) \\ \hline \text{Player 1: Down} & (0,0) & (1,1) \\ \end{array}$$
In this matrix, the players need to coordinate on either (Up, Left) or (Down, Right) to achieve the best outcome.

Communication and Coordination

A historical example: the aftermath of Hurricane Katrina exemplifies the importance of coordination in crises.

The Investor Game

In the investment game, the more players expect others to invest, the more likely they are to invest themselves. This feature indicates the presence of strategic complements.

Battle of the Sexes

Game Setup

Consider two players choosing movies. They have preferences described as follows:

Payoff matrix:
$$\begin{array}{c|c|c} & \text{Player 2: Bourne Ultimatum} & \text{Player 2: Good Shepherd} \\ \hline \text{Player 1: Bourne Ultimatum} & (2, 1) & (0, 0) \\ \hline \text{Player 1: Good Shepherd} & (0, 0) & (1, 2) \\ \end{array}$$
Each player most prefers watching their favorite film together, and prefers meeting at either film to not meeting at all. A third option, Snow White, is dominated for both players and can be deleted first.

Cournot Duopoly

Game Setup

In Cournot Duopoly, two firms compete in the same market by choosing quantities of an identical product. The strategies are the quantities produced, q1 and q2.

Revenue and Cost Functions

1. Price Function:
P = a − b(q1 + q2)
where a and b are parameters affecting price.

2. Profit Function for Firm 1:
π1 = P ⋅ q1 − c ⋅ q1 = (a − b(q1 + q2))q1 − cq1

Best Response Functions

To find the Nash Equilibrium, each firm maximizes its profit by taking the other’s output as given:

  1. Differentiate profit w.r.t. quantity.

  2. Set the derivative to zero for optimal output.

Best response of Firm 1:
$$q_1^* = \frac{a - c - b q_2}{2b}$$
Best response of Firm 2:
$$q_2^* = \frac{a - c - b q_1}{2b}$$

Nash Equilibrium

At Nash Equilibrium, both firms produce:
$$q_1^* = q_2^* = \frac{a - c}{3b}$$
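A quick numeric check under illustrative parameters (a = 12, b = 1, c = 0, chosen only for this sketch): iterating the first-order-condition best response converges to the equilibrium output (a − c)/(3b) = 4.

```python
def cournot_br(q_other, a=12.0, b=1.0, c=0.0):
    """Best response from the first-order condition:
    maximizing (a - b(q + q_other))q - cq gives q = (a - c - b*q_other)/(2b)."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

# Best-response dynamics converge to the Nash output (a - c)/(3b) = 4.
q1 = q2 = 0.0
for _ in range(60):
    q1, q2 = cournot_br(q2), cournot_br(q1)
print(round(q1, 6), round(q2, 6))  # 4.0 4.0
```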

Comparative Outcomes

Implications

Cournot Equilibrium leads to lower industry profits compared to a monopoly but higher than perfect competition. It reflects the tension between competition and cooperation in oligopolistic markets.

Notes on Imperfect Competition

Introduction to Imperfect Competition

Cournot Model Recap

Bertrand Competition

Transition to Price Competition

Demand Structure

Profit Maximization

Nash Equilibrium

Key Takeaways

Differentiated Products

Political Implications

Notes on Candidate-Voter Model and Mixed Strategies

Candidate-Voter Model

The candidate-voter model represents a scenario where voters may also be candidates; however, candidates cannot choose their positions. This is a departure from earlier models like the Downs or median-voter model, which predicted that candidates would crowd at the center.

Key Lessons from the Model

  1. Multiple Nash Equilibria:

    • There are numerous Nash equilibria in which candidates are not necessarily crowded at the center.

    • Not all equilibria require candidates to be positioned at the center.

  2. Entry Effects:

    • Entering on the left can lead to a right candidate winning.

    • Conversely, entering on the right can skew the victory towards a left-wing candidate.

  3. Proximity to Center:

    • Candidates cannot be positioned too far apart. If they are too distant, a candidate positioned closer to the center will likely enter and win.

Equilibrium Analysis

The candidates in the model can be positioned within a certain range. If we assume the political spectrum is represented on the interval [0, 1], the following conditions hold for the existence of equilibria:
If CL is the left candidate’s position and CR is the right candidate’s position, equilibria require:
$$C_L > \frac{1}{6} \quad \text{and} \quad C_R < \frac{5}{6}$$
If candidates exceed these bounds, then a new candidate enters from the center.

Final Observations

The candidates’ positions can be anywhere in the bounded interval [1/6, 5/6], illustrating that while extreme positions are unfavorable, there exists freedom to maneuver closer to the center without forcing crowding there.

Introduction to Mixed Strategies

The next stage is the introduction of mixed strategies, where players can randomize their choices instead of sticking with a pure strategy.

Example: Rock, Paper, Scissors

The payoff matrix for the game is:
$$\begin{array}{c|c|c|c} & \text{Rock} & \text{Paper} & \text{Scissors} \\ \hline \text{Rock} & (0,0) & (-1,1) & (1,-1) \\ \hline \text{Paper} & (1,-1) & (0,0) & (-1,1) \\ \hline \text{Scissors} & (-1,1) & (1,-1) & (0,0) \\ \end{array}$$

Existence of Nash Equilibrium in Mixed Strategies

Expected Payoffs

To prove that playing a mixed strategy is a Nash Equilibrium, we compute the expected payoffs for a player choosing each action against an opponent who plays 1/3 for each action:
$$\begin{aligned} \text{Expected Payoff for Rock} &= \frac{1}{3}(0) + \frac{1}{3}(-1) + \frac{1}{3}(1) = 0 \\ \text{Expected Payoff for Paper} &= \frac{1}{3}(1) + \frac{1}{3}(0) + \frac{1}{3}(-1) = 0 \\ \text{Expected Payoff for Scissors} &= \frac{1}{3}(-1) + \frac{1}{3}(1) + \frac{1}{3}(0) = 0 \end{aligned}$$

From this, it follows that each player has no incentive to deviate from this strategy since the expected payoff remains 0 regardless of their mixed strategy.
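The indifference computation can be verified in a few lines, using the standard Rock, Paper, Scissors payoffs for the row player:

```python
# Row player's payoffs in Rock, Paper, Scissors against columns R, P, S.
U = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]

def expected_payoff(row, mix):
    """Expected payoff of pure strategy `row` against an opponent
    mixing over columns with probabilities `mix`."""
    return sum(q * u for q, u in zip(mix, U[row]))

uniform = [1 / 3, 1 / 3, 1 / 3]
print([expected_payoff(r, uniform) for r in range(3)])  # [0.0, 0.0, 0.0]
```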

Conclusion

The exploration of the candidate-voter model highlights key aspects of strategic decision-making in political science. Transitioning to mixed strategies broadens our understanding of game theory applications, particularly through familiar examples like "rock, paper, scissors". This illustrates the importance of strategy randomness in achieving equilibrium in competitive environments.

Lecture Notes on Mixed Strategies and Nash Equilibrium

Introduction to Mixed Strategies

In the previous lecture, we examined the application of mixed strategies in the game of Rock, Paper, Scissors, where the optimal strategy concluded with a probability distribution of $\frac{1}{3}, \frac{1}{3}, \frac{1}{3}$.

Definition of a Mixed Strategy

A mixed strategy Pi for player i is defined as a randomization over i’s pure strategies. The notation Pi(si) denotes the probability that player i selects pure strategy si given the mixed strategy Pi.


Pi = (Pi(R), Pi(P), Pi(S)).

For example, the uniform mix sets $P_i(R) = \frac{1}{3}, P_i(P) = \frac{1}{3}, P_i(S) = \frac{1}{3}$.

Properties of Mixed Strategies

1. Zero Probability: A mixed strategy does not have to include all pure strategies; some can have zero probability. For instance, a mixed strategy could be $P = \left( \frac{1}{2}, \frac{1}{2}, 0 \right)$.

2. Pure Strategy: If Pi assigns probability 1 to a single pure strategy si, then Pi is simply the pure strategy si.

Expected Payoffs from Mixed Strategies

The expected payoff from mixed strategy Pi is determined as follows:


Expected Payoff(Pi) = ∑Pi(sj) × Payoff(sj).

Consider the example from the “Battle of the Sexes” game where Player I adopts a mixed strategy $P = \left( \frac{1}{5}, \frac{4}{5} \right)$ and Player II adopts $Q = \left( \frac{1}{2}, \frac{1}{2} \right)$.

To compute Player I’s expected payoff, based on Player I’s payoffs (2, 0) from strategy A and (0, 1) from strategy B against Player II’s two actions:

1. Expected payoff of pure strategy A against Q:
$$E_P(A) = \frac{1}{2} \cdot 2 + \frac{1}{2} \cdot 0 = 1.$$

2. Expected payoff of pure strategy B against Q:
$$E_P(B) = \frac{1}{2} \cdot 0 + \frac{1}{2} \cdot 1 = \frac{1}{2}.$$

Finally, the expected payoff of using mixed strategy P:
$$\text{Expected Payoff}(P) = \frac{1}{5} \cdot 1 + \frac{4}{5} \cdot \frac{1}{2} = \frac{3}{5}.$$

This shows the expected payoff must lie between the payoffs from each pure strategy:


$$\frac{1}{2} < \frac{3}{5} < 1.$$
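The same computation in code; a small sketch that reproduces the 3/5 result:

```python
def mixed_payoff(p_mix, q_mix, u):
    """Player I's expected payoff when I mixes over rows with p_mix and
    II mixes over columns with q_mix; u[i][j] is I's payoff at (i, j)."""
    return sum(p * q * u[i][j]
               for i, p in enumerate(p_mix)
               for j, q in enumerate(q_mix))

# The worked example: Player I's payoffs (2,0) for A and (0,1) for B,
# with P = (1/5, 4/5) and Q = (1/2, 1/2); the result is 3/5.
u1 = [[2, 0], [0, 1]]
print(round(mixed_payoff([0.2, 0.8], [0.5, 0.5], u1), 6))  # 0.6
```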

Nash Equilibrium in Mixed Strategies

A mixed strategy profile (P1*, P2*, …, PN*) is a Nash Equilibrium if each player’s strategy Pi* is a best response to the strategies of all other players P − i*.

The key insight is:

Example: Tennis

Consider a simplified game involving two players, Venus and Serena, where each must choose to hit the ball to the left or the right:


$$\begin{array}{c|c|c} & \text{Serena Left (L)} & \text{Serena Right (R)} \\ \hline \text{Venus Left (L)} & (50,50) & (80,20) \\ \hline \text{Venus Right (R)} & (90,10) & (20,80) \\ \end{array}$$

From the payoff table, we can compute the respective expected payoffs for mixed strategies based on the probabilities assigned by both players.

Conclusions

Lecture Notes on Mixed Strategies and Nash Equilibria

Overview of the Lecture

In this lecture, we explore the concept of mixed strategies and, particularly, mixed-strategy equilibria. The main ideas discussed include the necessary conditions for equilibrium in mixed strategies and practical applications in game theory.

Key Concepts

Mixed Strategies

A mixed strategy is one in which a player randomizes over two or more pure strategies. If a player is playing a mixed strategy in equilibrium, each pure strategy in the mix must be a best response to the other player’s strategy.

Mixed-Strategy Equilibria

For two players, say Player I and Player II, with mixed strategies P* and Q*, one must confirm that:

Example: Venus and Serena Game

We analyzed Venus’ strategy (denote as P*) against Serena’s mix (denote as Q*). Suppose:
$$\begin{aligned} P^* &= 0.7 \quad \text{(probability of playing L)} \\ 1 - P^* &= 0.3 \quad \text{(probability of playing R)} \\ Q^* &= 0.6 \quad \text{(probability of Serena playing l)} \\ 1 - Q^* &= 0.4 \quad \text{(probability of Serena playing r)}\end{aligned}$$

Payoff Calculation for Venus

To verify that P* is a best response to Q*:

This confirms that maintaining mixed strategies yields the same expected payoffs.
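These equilibrium mixes can be recovered from the indifference conditions: each player’s mix makes the opponent indifferent between her two pure strategies. A sketch using the payoffs from the tennis table:

```python
def solve_indifference(u):
    """Probability on the opponent's first column that makes the row
    player indifferent between the two rows of the 2x2 payoff matrix u.

    Solves u[0][0]q + u[0][1](1-q) = u[1][0]q + u[1][1](1-q) for q.
    """
    num = u[1][1] - u[0][1]
    den = (u[0][0] - u[0][1]) - (u[1][0] - u[1][1])
    return num / den

# Payoffs read off the tennis table; rows are the listed player's choices.
venus = [[50, 80],        # Venus L vs (Serena L, Serena R)
         [90, 20]]        # Venus R
serena_rows = [[50, 10],  # Serena L vs (Venus L, Venus R)
               [20, 80]]  # Serena R
q_star = solve_indifference(venus)        # Serena's P(L): makes Venus indifferent
p_star = solve_indifference(serena_rows)  # Venus's P(L): makes Serena indifferent
print(p_star, q_star)  # 0.7 0.6
```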

Strictly Profitable Deviations

It is shown that any deviation from the mixed strategy yields either the same or a lower payoff, confirming that:
No strictly profitable pure-strategy deviations exist.
Additionally, any mixed strategy deviation yields the same expected payoff as the pure strategies.

Applications of Mixed Strategies

Sports

Mixed strategies are notably relevant in sports contexts, such as:

Tax Audits

The concept of mixed strategies can apply to tax compliance, where taxpayers may choose to report honestly or cheat, and auditors may choose to audit based on the likelihood of compliance. The equilibrium:
$$\begin{aligned} Q &= \text{proportion of honest taxpayers} \\ P &= \text{probability of being audited}\end{aligned}$$
can be calculated similarly, yielding insights into compliance behavior.

Policy Experiment: Changing Audit Outcomes

When policy changes are introduced (e.g., increasing fines for tax evasion), the equilibrium outcomes might not change as expected if the audit payoffs remain static. However, tweaking the audit’s impact can shift taxpayer compliance.

Lessons Learned

Conclusion

The discussion of mixed strategies and Nash equilibria emphasizes the complexity of strategic decision-making, offering insights applicable beyond theoretical contexts into real-world scenarios like sports, business, and regulatory environments.

Notes on Evolution and Game Theory

Introduction

In this lecture, we explore the relationship between evolution and Game Theory. We will discuss how concepts of Game Theory can be applied to biological systems, particularly animal behavior, and how evolutionary principles can inform social sciences.

Supplementary Materials

An additional reading packet is available, which may enhance understanding but is not compulsory. A handout accompanying the lecture will be provided.

Evolution in the Context of Game Theory

Reasons for Studying Evolution in Game Theory

1. Impact of Game Theory on Biology: In recent decades, Game Theory has significantly influenced biology, especially in the context of animal behavior.

2. Influence of Evolutionary Biology on Social Sciences: Evolutionary principles are often used metaphorically in fields like political science and economics. A notable example is competition among firms mirroring survival of the fittest in nature.

Modeling Evolutionary Dynamics

We will consider a simplified model focusing on within-species competition through symmetric two-player games. The process involves:

Population Dynamics: We assume a large population where individuals are matched randomly for interactions.
Strategy Outcomes:

Key Assumptions

Focus on asexual reproduction to simplify dynamics. Real-world complexities are acknowledged but will not be included in this introductory model.

Evolutionarily Stable Strategies (ESS)

A strategy S is termed evolutionarily stable if, when the population predominantly plays S, any mutant strategy S′ cannot invade and succeed.

Formal Definition

A pure strategy S is evolutionarily stable if:
Payoff(S, S) > Payoff(S′, S)  ∀S′ ≠ S
or, whenever Payoff(S, S) = Payoff(S′, S) for some mutant S′:
Payoff(S, S′) > Payoff(S′, S′)
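The two-part test can be checked mechanically. A minimal sketch for symmetric two-player games, using the Prisoner’s Dilemma payoffs (cooperate = 0, defect = 1):

```python
def is_ess(u, s):
    """Check whether pure strategy s is evolutionarily stable.

    u[a][b] is the payoff to playing a against an opponent playing b
    in a symmetric game.
    """
    for t in range(len(u)):
        if t == s:
            continue
        if u[s][s] < u[t][s]:
            return False  # (s, s) is not even a Nash equilibrium
        if u[s][s] == u[t][s] and u[s][t] <= u[t][t]:
            return False  # mutant t ties against s but s does not beat t vs t
    return True

# Prisoner's Dilemma payoffs: only defection (index 1) is stable.
pd = [[2, 0],
      [3, 1]]
print(is_ess(pd, 0), is_ess(pd, 1))  # False True
```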

Illustrative Example: Prisoner’s Dilemma

Let’s analyze cooperation and defection within the classic Prisoner’s Dilemma setup. Payoff matrix:
$$\begin{array}{c|c|c} & C & D \\ \hline C & (2,2) & (0,3) \\ \hline D & (3,0) & (1,1) \\ \end{array}$$

Analysis of Cooperation: if the population consists entirely of cooperators C, a mutant defector earns Payoff(D, C) = 3 against them, exceeding the incumbents’ Payoff(C, C) = 2, so defection invades and cooperation is not evolutionarily stable.

Key Lessons and Insights

  1. Natural Selection: Strategies that appear advantageous might not always lead to cooperative behaviors (e.g., infanticide in baboons).

  2. Dominance Implications: A strictly dominated strategy is never evolutionarily stable.

  3. Nash Equilibrium Connection:

Understanding Nash Equilibria vs. ESS

Consider a simple two-player game:
$$\begin{array}{c|c|c} & A & B \\ \hline A & (1,1) & (0,0) \\ \hline B & (0,0) & (0,0) \\ \end{array}$$
The Nash equilibria of this game are (A, A) and (B, B). However, B is not evolutionarily stable: Payoff(A, B) = Payoff(B, B) = 0, but Payoff(A, A) = 1 > Payoff(B, A) = 0, so a rare mutant playing A would successfully invade a population of B-players.

Conclusion

In summary, the lecture emphasizes valuable intersections between evolutionary biology and Game Theory, underlining the vital roles of strategy stability, reproductive dynamics, and the implications of Nash equilibria within ecological and economic frameworks.

Evolutionary Stability and Nash Equilibrium Notes

Introduction

These notes cover the concept of evolutionary stability in relation to Nash equilibrium, primarily focused on pure strategies with applications to various examples, including social conventions and biological strategies.

Key Definitions

Nash Equilibrium

A strategy profile (S, S) is a Nash Equilibrium if no player can benefit from unilaterally changing their strategy, i.e.,
Ui(S, S) ≥ Ui(T, S)  ∀T (for player i)

Evolutionarily Stable Strategies (ESS)

A strategy S is an evolutionarily stable strategy if:

  1. (S, S) is a Nash Equilibrium.

  2. If (S, S) is not strict, meaning there exists some strategy T such that U(S, S) = U(T, S), then
    U(S, T) > U(T, T)

Example 1: Simple Game

Consider a game where strategies A and B yield the following payoffs:


$$\begin{array}{c|c|c} & A & B \\ \hline A & 1 & 0 \\ \hline B & 0 & 0 \\ \end{array}$$

In this case, both (A, A) and (B, B) are symmetric Nash equilibria, but only A is evolutionarily stable:

Example 2: Social Convention Game

Consider driving on the left or right side of the street (a social convention) with the following payoff matrix:


$$\begin{array}{c|c|c} & \text{Left} & \text{Right} \\ \hline \text{Left} & 2 & 0 \\ \hline \text{Right} & 0 & 1 \\ \end{array}$$

The Nash equilibria here are both (L, L) and (R, R):

Example 3: Hawk-Dove Game

A more complex example involves the Hawk-Dove game, significantly used in evolutionary biology. The payoff matrix is:


$$\begin{array}{c|c|c} & \text{Hawk} & \text{Dove} \\ \hline \text{Hawk} & \frac{V - C}{2} & V \\ \hline \text{Dove} & 0 & \frac{V}{2} \\ \end{array}$$
Here V is the value of the contested resource, C is the cost of a fight, and the entries are the row player’s payoffs.
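Under the standard Hawk-Dove payoffs, when the cost of fighting exceeds the prize (C > V) neither pure strategy is evolutionarily stable; the stable population mixes hawks and doves at the hawk share p = V/C, where the two payoffs are equal. A small check under assumed illustrative values V = 2, C = 4:

```python
def hawk_payoff(p, V, C):
    """Expected payoff to a Hawk when a fraction p of the population are Hawks."""
    return p * (V - C) / 2 + (1 - p) * V

def dove_payoff(p, V, C):
    """Expected payoff to a Dove in the same population."""
    return (1 - p) * V / 2

# Assumed values: prize V = 2, fight cost C = 4 (so C > V).
V, C = 2, 4
p = V / C  # the mixed ESS hawk share
print(hawk_payoff(p, V, C), dove_payoff(p, V, C))  # 0.5 0.5
```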

Identification and Predictions

This section highlights the importance of identification and testability in evolutionary game theory. For instance:

Conclusion

Evolutionary stability does not universally lead to efficiency, often revealing unexpected outcomes in population strategies. Through various examples, rigorous methods can persuasively justify the dominant strategies and their stability in biological contexts.

Notes on Game Theory: Cash in a Hat

Introduction

This lecture introduces a game called "Cash in a Hat," which is a simplified representation of real-world economic interactions between lenders and borrowers. The game features two players—Player I is the investor (lender), while Player II is the entrepreneur (borrower).

Game Setup

Players and Choices

Payoffs

The payoffs of the game are as follows:

For Player I:

For Player II:

Game Analysis

The game is a sequential-move game: Player II observes Player I’s action before choosing his own. The concept of Backward Induction will be used to analyze the players’ strategies.

Sequential Moves and Backward Induction

Key Concepts

Decision Process

Player I must consider Player II’s best response to each of Player I’s actions. If Player I puts in $1, Player II’s best response is to match it (resulting in (1, 1.50)). If Player I puts in $3, Player II’s best response is to take the cash (resulting in a loss of $3 for Player I).

Incentive Compatibility and Moral Hazard

Definition

The potential misalignment of interests between lenders and borrowers, known as Moral Hazard, arises when borrowers may not act in the best interest of the lender after a loan is given.

Commitment Strategies

To mitigate issues such as moral hazard, commitment strategies can be applied. They include:

Conclusion

The concepts facilitated through the Cash in a Hat game provide insight into significant economic and strategic interactions. Understanding backward induction and incentive design are crucial for interpreting and analyzing real-world scenarios.

Key Takeaways

Lecture Notes: Quantity Competition

Introduction

In today’s lecture, we revisit quantity competition, specifically the Cournot model and its sequential counterpart, the Stackelberg model. We examine the implications of firms moving simultaneously versus sequentially in choosing their output levels.

The Cournot Model

Overview

In the Cournot model, two firms, Firm 1 and Firm 2, choose their output levels simultaneously. Let Q1 be the output chosen by Firm 1 and Q2 be the output chosen by Firm 2. The total quantity supplied to the market is given by:
Q = Q1 + Q2.

Market Demand

The market demand curve is represented as:
P = A − B(Q1 + Q2),
where P is the price, A is the intercept of the demand curve, and B is its slope.

Profit Function

The profit for each firm can be expressed as:
πi = P ⋅ Qi − C ⋅ Qi = (A − B(Q1 + Q2))Qi − CQi  (i = 1, 2),
where C is the constant marginal cost.

Best Response Functions

The best response function for Firm 1 given Firm 2’s output Q2 is derived by maximizing its profit:
Q1* = f(Q2),
and similarly for Firm 2:
Q2* = g(Q1).

The Nash Equilibrium occurs where the best response functions intersect.

The Stackelberg Model

Sequential Decisions

In the Stackelberg model, we analyze a sequential move game where Firm 1 moves first, and Firm 2 observes Firm 1’s choice before selecting its own output.

Backward Induction

To solve for the outcomes, we use backward induction, beginning with Firm 2’s decision: given Q1, Firm 2 maximizes its profit by choosing Q2.

Firm 2’s Decision

Firm 2’s profit maximization problem can be similarly set up as before, leading to:
$$Q_2^* = \frac{A - C - BQ_1}{2B}.$$

Firm 1’s Decision

Firm 1 anticipates how Firm 2 will respond and chooses Q1 to maximize its profit:
π1 = (A − B(Q1 + Q2))Q1 − CQ1.

Substituting in Firm 2’s best response gives:
$$Q_1^* = \frac{A - C}{2B}.$$

Equilibrium Outcomes

At equilibrium:
$$Q_2^* = \frac{A - C}{4B}.$$

Let us summarize key outcomes:

Firm 1’s output under Stackelberg is greater than under Cournot.

Firm 2’s output under Stackelberg is less than under Cournot.

Total output increases: Q1* + Q2* = 3(A − C)/(4B) exceeds the Cournot total of 2(A − C)/(3B).

Market prices decrease as total output increases, benefiting consumers.
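A numeric comparison under illustrative parameters (A = 12, B = 1, C = 0, not from the notes) confirms these rankings:

```python
def outputs(A=12.0, B=1.0, C=0.0):
    """Cournot vs Stackelberg outputs for the linear model above.

    A, B, C are illustrative parameter values, not from the notes.
    """
    cournot = (A - C) / (3 * B)                # each Cournot firm
    leader = (A - C) / (2 * B)                 # Stackelberg Firm 1
    follower = (A - C - B * leader) / (2 * B)  # Firm 2's best response
    return cournot, leader, follower

c, q1, q2 = outputs()
print(q1 > c, q2 < c, q1 + q2 > 2 * c)  # True True True
```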

Analyzing Strategic Moves

Let’s consider the implications of sequential versus simultaneous moves:

Conclusion

This lecture emphasizes the critical roles of timing and information in competitive settings. Understanding whether a first-mover or second-mover strategy is beneficial can greatly influence firm profits and market outcomes.

Lecture Notes on Game Theory: Zermelo’s Theorem and Strategic Games

Introduction

In the previous lecture, we analyzed the game of Nim, a strategic game where players alternately remove stones from two piles. The objective is to be the player who takes the last stone. We explored how initial conditions can determine which player has a winning strategy.

Zermelo’s Theorem

This theorem provides insights into determining winners in games of perfect information.

Definition of Perfect Information

A game possesses perfect information if, at every decision-making point, a player knows all previous actions taken in the game. For example:

Conditions of Zermelo’s Theorem

  1. There are two players in the game.

  2. The game has a finite number of nodes, ensuring no infinite decision paths.

  3. The game has three possible outcomes:

Under these conditions, Zermelo’s theorem states that:

Either Player 1 can force a win, or Player 1 can force a tie, or Player 2 can force a win (loss for Player 1).

Games can thus be categorized into those where:

Illustrative Examples

The Game of Nim

Tic-Tac-Toe

Tic-Tac-Toe is a game where, with optimal play from both players, the outcome is always a tie.

Checkers

Checkers can also be analyzed using Zermelo’s theorem, which guarantees a determined outcome even when the winning (or drawing) strategy is not explicitly known.

Chess

Chess also meets the conditions of Zermelo’s theorem. Hence either White can force a win, Black can force a win, or both can force a draw; which of these holds remains unknown, because the game tree is far too large to solve.

Proof by Induction

We will prove Zermelo’s theorem by induction on the maximum length of the game, denoted N.

Base Case: N = 1

The game has only one move. Possible outcomes are:

In these cases, solutions are straightforward since we define outcomes by their immediate results.

Inductive Step

Assume the theorem holds for all games of length  ≤ N. We need to show it holds for games of length N + 1.

  1. Consider a game tree of length N + 1.

  2. The first move by Player 1 leads to various sub-games that are either of length N or shorter.

  3. Each sub-game has a determined outcome based on our inductive assumption.

  4. Thus, Player 1 can choose the optimal sub-game leading to a solution.


If some sub-game has solution W, then Player 1 chooses it and wins.

If no sub-game has solution W but some has solution T, then Player 1 can force a tie.

If every sub-game has solution W for Player 2, then Player 2 wins.

Strategies in Games of Perfect Information

A strategy in perfect information games is defined as a complete plan of action detailing which decisions a player will make at each of their decision nodes.

Example Game

In a tree where Player 1 chooses between Up (U) or Down (D), and Player 2 then chooses Left (L) or Right (R) after observing Player 1’s choice:
Possible strategies for Player 1:

Strategies should include possible actions even if they might never be reached.

Backward Induction Analysis

  1. Analyze the game from the end.

  2. Determine Player 2’s best response based on Player 1’s actions.

  3. Work backward to discover the optimal actions for Player 1.
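The three steps above can be sketched as a recursive solver; the game tree below, with Player 1 choosing U/D and Player 2 responding L/R, is hypothetical:

```python
def backward_induct(node, player=0):
    """Backward induction on a perfect-information game tree.

    A node is either a payoff tuple (a leaf) or a dict mapping the
    mover's actions to subtrees; the two players alternate moves.
    Returns (payoffs, chosen action) at this node.
    """
    if isinstance(node, tuple):
        return node, None
    best = None
    for action, child in node.items():
        payoffs, _ = backward_induct(child, 1 - player)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, action)
    return best

# Hypothetical tree: Player 1 picks U or D, then Player 2 picks L or R.
tree = {"U": {"L": (2, 1), "R": (0, 0)},
        "D": {"L": (3, 0), "R": (1, 2)}}
print(backward_induct(tree))  # ((2, 1), 'U')
```

Player 2 picks L after U and R after D, so Player 1 compares (2, 1) with (1, 2) and moves U.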

Nash Equilibrium

The Nash equilibrium provides additional understanding of strategy in games. It identifies strategy pairs from which no player can improve their outcome by unilaterally changing their strategy.

Example of a Nash Equilibrium

In a game:

Payoff outcomes:

This leads to Nash equilibria but may result in non-credible threats.

Conclusion

Understanding strategic game theory through concepts like Zermelo’s theorem enhances our ability to analyze games thoroughly. The applications range from simple games like Tic-Tac-Toe to complex games like chess and economics.

The critical takeaway is that every game meeting these definitions is guaranteed to have a solution. Strategies, backward induction, Nash equilibria, and the credibility of threats add further intricacy, particularly in potential market competitions.

Notes on Game Theory: Entry Deterrence and Reputation

Overview

This lecture provides a detailed analysis of a game involving an incumbent monopolist facing potential entrants in a market. The key concepts discussed include Nash Equilibria, backward induction, and the role of reputation in strategic decision-making.

Game Setup

We considered a game with two players:

Payoffs

If the Entrant stays out: The Incumbent remains a monopolist and earns $3 million in profit.
If the Entrant enters:

Analyzing the Game

Using a matrix form, we found two Nash Equilibria:

  1. Entrant goes in and the Incumbent does not fight.

  2. Entrant stays out and the Incumbent fights.

However, backward induction suggests that the more rational outcome is for the Entrant to enter while the Incumbent chooses not to fight.

Game Extension: Multiple Markets

The lecture then extended the initial game to a scenario where the Incumbent monopolist, Ale, holds a monopoly in 10 different markets. Potential Entrants decide one after another whether to enter. The key elements are:

Sequential Entry Decisions

Using examples from students, the outcomes of hypothetical decisions were examined:

Reputation and Entry Deterrence

The discussion highlighted how the Incumbent may develop a reputation as a tough competitor, which can deter future entries. This raises the question of whether the Incumbent’s threat to fight is credible:

Establishing Reputation

To formalize the reputation concept, the lecture introduced the idea of a small probability, say 1%, that the Incumbent is "crazy" and enjoys fighting. The mechanics are as follows:

The model also emphasizes that even if Ale is sane, mimicking the crazy type can still deter Entrants.

Example Game: Duel

The latter part of the lecture transitioned into analyzing a game called "duel," illustrating strategic timing:

Key Notation


πi(d): Player i’s probability of hitting the opponent when shooting from distance d.

Assumptions and Analysis

The analysis assumed that:

The conclusion established a distance, denoted as d*, at which the first attack should occur. The logic follows that, prior to d*, no player has an incentive to shoot. At d*, however, the decision dynamics change based on players’ knowledge of each other’s abilities and intended actions.
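The pre-emption logic can be illustrated numerically. The hit-probability functions below are illustrative assumptions (the lecture does not specify them): both players’ accuracy rises as the distance d shrinks, and d* is the first distance, walking inward from d = 1, at which the two hit probabilities sum to at least one, so shooting beats waiting.

```python
# A numerical sketch of the duel's first-shot distance d*, under
# assumed (hypothetical) hit-probability functions.

def p1(d):
    return max(0.0, 1.0 - d)          # Player 1's hit probability at distance d

def p2(d):
    return max(0.0, 1.0 - 1.4 * d)    # Player 2 is a worse shot at long range

def first_shot_distance(step=0.001):
    """Largest d at which p1(d) + p2(d) >= 1: before this point waiting
    dominates shooting; at d*, the player to move should fire."""
    d = 1.0
    while d > 0:
        if p1(d) + p2(d) >= 1.0:
            return round(d, 3)
        d -= step
    return 0.0

print(first_shot_distance())
```

Note that d* depends only on the sum of the two abilities, not on who is the better shot, which mirrors the lecture's point that the decision dynamics change at d* for whichever player moves there.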

Conclusion

We learned that:

  1. Reputation plays a crucial role in market entry decisions. A faint chance of aggressive behavior can alter outcomes.

  2. Backward induction and dominance arguments help solve complex strategic decisions, emphasizing when to act in games.

  3. As seen in the duel game, the notion of timing decisions can influence outcome effectiveness significantly.

The lecture showcases how game dynamics can seemingly contradict standard theoretical predictions and highlights the importance of considering psychological and historical contexts in strategic decision-making.

Notes on Ultimatum and Bargaining Games

Introduction

In this lecture, we will explore two types of games, specifically focusing on ultimatum and bargaining games.

Ultimatum Game

Game Structure

The ultimatum game consists of two players, Player 1 and Player 2. Player 1 makes a "take it or leave it" offer concerning a pie worth $1. The split can be denoted as:

Player 1 receives S
Player 2 receives (1 − S)

Player 2 has two choices:

Example Offers

The lecture included real-life examples where various students made offers. Below are some examples of offers and their outcomes:

Backward Induction Analysis

Two-Period Bargaining Game

Structure of the Game

In the two-period game:

The key addition is that if the game goes to the second stage, the total value of the pie shrinks to δ, where 0 < δ < 1.

Discounting

Discounting refers to the idea that money today is worth more than money tomorrow:
Value of $1 tomorrow  = δ × 1 (in today’s dollars)

Analysis of the Two-Period Game

Using backward induction, we analyze the second round:
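A minimal sketch of that backward induction, assuming (as in the standard two-period game) that Player 2 makes the offer in period 2 and a responder accepts any offer matching their continuation value:

```python
# Backward-induction sketch of the two-period bargaining game.

def solve_two_period(delta):
    """Return (player1_share, player2_share) of the period-1 pie of size 1."""
    # Period 2: the pie has shrunk to delta; Player 2 proposes and can keep
    # (essentially) all of it, so Player 2's continuation value is delta.
    p2_continuation = delta
    # Period 1: Player 1 must offer Player 2 exactly that continuation value
    # to induce acceptance, and keeps the remainder.
    return (1 - p2_continuation, p2_continuation)

print(solve_two_period(0.5))  # → (0.5, 0.5)
```

The more patient Player 2 is (the higher δ), the larger the share Player 1 must concede in the first period.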

General Observations

The real-world implications of these games reveal insights:

Conclusions

In conclusion, bargaining behavior can reveal significant insights into economic decisions beyond simple transactional models. The observations around fairness, reputation, and individual discount rates can influence outcomes in ways traditional models may not predict accurately.

Notes on Game Theory

Overview of Game Types

In this lecture, we distinguish between two types of games:

We aim to analyze games that involve both simultaneous and sequential moves, extending the technique of backward induction.

Example of a Sequential Move Game

Consider a simple game in which:

The payoffs are as follows:
$$\begin{array}{|c|c|c|} \hline \text{Player 1's Choice} & \text{Player 2's Choice} & \text{Payoffs} \\ \hline \text{up} & \text{left} & (4,0) \\ \text{up} & \text{right} & (0,4) \\ \text{middle} & \text{left} & (0,4) \\ \text{middle} & \text{right} & (4,0) \\ \text{down} & \text{left} & (1,2) \\ \text{down} & \text{right} & (0,0) \\ \hline \end{array}$$

To solve this game, we apply backward induction:

Thus, Player 1 predicts Player 2’s responses:

So, Player 1 chooses down, and Player 2 chooses left.
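The backward-induction computation above can be written out directly from the payoff table (payoffs listed as (Player 1, Player 2)):

```python
# Backward induction on the sequential-move game from the table.

payoffs = {
    ("up", "left"): (4, 0),     ("up", "right"): (0, 4),
    ("middle", "left"): (0, 4), ("middle", "right"): (4, 0),
    ("down", "left"): (1, 2),   ("down", "right"): (0, 0),
}

def backward_induction():
    # Step 1: at each of Player 2's nodes, pick the reply that maximizes
    # Player 2's payoff given Player 1's observed move.
    best_reply = {}
    for move1 in ("up", "middle", "down"):
        best_reply[move1] = max(("left", "right"),
                                key=lambda m2: payoffs[(move1, m2)][1])
    # Step 2: Player 1 anticipates those replies and maximizes own payoff.
    move1 = max(("up", "middle", "down"),
                key=lambda m1: payoffs[(m1, best_reply[m1])][0])
    return move1, best_reply[move1]

print(backward_induction())  # → ('down', 'left')
```

Player 2 punishes both up (with right) and middle (with left), leaving down as Player 1’s best opening, exactly as stated above.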

Information Sets

Now, consider a new scenario where Player 2 cannot distinguish between the upper and middle nodes. We represent this with an information set:

Player 2 knows that Player 1 chose either up or middle, but cannot determine which. This changes the structure of Player 2’s optimal responses.

Backward Induction with Information Sets

In this case, Player 2’s response to uncertainty affects Player 1:

Formal Definitions

Perfect Information

Definition: A game of perfect information is one where every information set contains only one node.

Imperfect Information

Definition: A game of imperfect information consists of at least one information set that contains more than one node.

Strategies

For games with imperfect information, a strategy for Player i defines actions at each information set:

Nash Equilibrium

A Nash Equilibrium occurs when no player can benefit from unilaterally changing their strategy, given other players’ strategies.

Subgame Perfect Equilibrium (SPE)

Definition: A Nash Equilibrium is a subgame perfect equilibrium if it induces a Nash Equilibrium in every subgame.

Identification of Subgames
  1. A subgame starts at a single node.

  2. It includes all successor nodes of that starting node.

  3. It does not break any information sets.

Conclusion

In summary, this class focuses on the significance of information in game theory. We have learned:

This concludes the class, and further exploration will involve applications using these concepts.

Notes on Game Theory: Sub-game Perfect Equilibrium

Introduction

In this lecture, we explored several advanced concepts in game theory, primarily focusing on sub-game perfection and how it relates to Nash equilibrium.

Key Concepts

Imperfect Information

Strategies and Information Sets

Sub-games

Sub-game Perfect Equilibrium (SPE)

Example: "Don’t Screw Up" Game

Game Structure


$$\begin{array}{|c|c|c|} \hline & \text{Left} & \text{Right} \\ \hline (U, U) & (4, 3) & (1, 2) \\ \hline (U, D) & (3, 1) & (1, 2) \\ \hline (D, U) & (2, 1) & - \\ \hline (D, D) & (2, 1) & - \\ \hline \end{array}$$

Solutions

Backward Induction: Start from the last stage and analyze the outcomes:

This results in the backward-induction equilibrium ((U, U), L).

Nash Equilibria

There are three Nash equilibria:

  1. (U, U), L → (4, 3) (Backward Induction)

  2. (D, U), R → (2, 1)

  3. (D, D), R → (2, 1)

Identifying Sub-game Perfect Equilibrium

1. Last sub-game analysis (Player 1’s decisions):

2. Second sub-game (Player 2’s decisions):

The final SPE confirmed is ((U, U), L).

Example: The Matchmaker Game

Setting the Scenario

Strategies and Outcomes


$$\text{If sent: Payoffs:} \begin{cases} (2, 1) & \text{if Gaddis-Gaddis (G, G)} \\ (1, 2) & \text{if Spence-Spence (S, S)} \\ (0, 0) & \text{otherwise} \end{cases}$$

Game Analysis

  1. Start from the last sub-game where players choose venues.

  2. Calculate the Nash equilibria and find their impacts on Player 1’s decision (to send or not).

  3. Strategic implications: commitment can soften competition, lowering outputs and raising profits.

Business Application Example: Cournot Competition

Initial Setup

Firm A and Firm B in Cournot competition, with prices modeled as:
$$P = 2 - \frac{1}{3}(Q_A + Q_B)$$
Marginal costs for both firms set at $1/ton.
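With the inverse demand above and marginal cost $1/ton, the Cournot equilibrium can be computed by iterating the firms’ best-response functions (a standard computation; the specific code is only a sketch):

```python
# Cournot equilibrium for P = 2 - (Qa + Qb)/3 with marginal cost 1.
# Maximizing (P - 1) * q gives the first-order condition
# 1 - 2q/3 - q_other/3 = 0, i.e. the best response below.

def best_response(q_other):
    return (3 - q_other) / 2

qa = qb = 0.0
for _ in range(100):                 # best-response dynamics converge here
    qa, qb = best_response(qb), best_response(qa)

price = 2 - (qa + qb) / 3
profit = (price - 1) * qa
print(round(qa, 3), round(price, 3), round(profit, 3))  # → 1.0 1.333 0.333
```

Each firm produces 1 ton, the price is $4/3, and each earns a profit of $1/3; this baseline is what the machine-rental decision below perturbs.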

Decision on Machine Rental

  1. Accountant’s Perspective: Compares the fixed rental cost with the variable-cost savings and concludes the machine is not worth renting.

  2. Economic Model: Recognizes that the cost change shifts the strategic equilibrium, so both firms’ quantity choices must be recalculated.

Conclusion on Renting Decision

Incorporate strategic behavior into the equation; the ultimate analysis shows that:

Lessons Learned

  1. Importance of Sub-game Analysis: Identifying how optimal strategies change after shifts in costs or market entry.

  2. Strategic Interactions Matter: Assuming other players’ strategies remain constant can lead to fundamental miscalculations in expected outcomes.

Notes on Sub-Game Perfect Equilibrium

Introduction

In this lecture, we explore the concept of sub-game perfect equilibrium (SPE) through a detailed analysis of a strategic game involving two players. We discuss applications of SPE in various practical scenarios and introduce a new game for analysis.

Strategic Effects

Game Setup

The game involves two players who simultaneously choose between two actions: Fight (F) or Quit (Q).

Game Analysis

Normal Form Representation

$$\begin{array}{c|c|c} & \text{Player B: Quit (Q)} & \text{Player B: Fight (F)} \\ \hline \text{Player A: Quit (Q)} & (0, 0) & (0, V) \\ \hline \text{Player A: Fight (F)} & (V, 0) & (-C, -C) \\ \end{array}$$

Sub-Game Perfect Equilibria

1. Second Stage Analysis:

2. First Stage Analysis:

Mixed Strategy Equilibria

1. Players can adopt mixed strategies such that:
$$P_A = \text{Prob}(F) = \frac{V}{V + C}$$

$$P_B = \text{Prob}(F) = \frac{V}{V + C}$$

2. Expected Payoff:
E[A] = E[B] = 0 (since mixing leads to no ultimate gain).
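The zero expected payoff follows from the indifference condition that pins down the mixing probability: when the opponent fights with probability p = V/(V + C), fighting and quitting yield the same payoff. A quick numerical check (with illustrative values of V and C):

```python
# Indifference check for the war-of-attrition mixed strategy.

def expected_fight_payoff(V, C, p):
    # Win V if the opponent quits (prob 1 - p); pay C if both fight.
    return (1 - p) * V + p * (-C)

V, C = 5.0, 3.0
p = V / (V + C)                      # equilibrium probability of fighting
# Quitting yields 0 for sure, so fighting must also yield 0 in equilibrium.
print(round(expected_fight_payoff(V, C, p), 10))  # → 0.0
```

Since quitting guarantees 0, any mixing probability other than V/(V + C) would make one pure action strictly better and destroy the equilibrium.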

Wars of Attrition

The concept from the game demonstrated relates well to real-world examples of Wars of Attrition where:

Infinite Horizon Games

We adapt this understanding to Infinite Horizon Games:

  1. The game is modeled as ongoing indefinitely; thus, the sunk costs from any prior period become irrelevant.

  2. At any period, the players can expect future payoffs based on mixed strategies established.

  3. Thus:


Probability that the conflict continues in any given period = P² (both players choose to fight), which induces a geometric distribution over the conflict’s possible lengths.
In conclusion, the mixed strategies demonstrate how rational players might engage in a costly standoff more often than expected, because future payoffs are contingent and past costs are sunk.

Summary

  1. The lecture provided insights into the concept of SPE and its applications in strategic thinking and behavior.

  2. Engaging in mixed strategies leads to nuanced interpretations of combat and competition, demonstrated in practical examples from economics and history.

General Notes on Repeated Interactions in Game Theory

Introduction

In this week of study, we focus on the concept of repeated interactions among players in games, particularly how such interactions can induce and sustain cooperative behavior. We explore significant examples like the Prisoners’ Dilemma, analyzing how repeated play can alter strategic decisions.

The Prisoners’ Dilemma

The Prisoners’ Dilemma is characterized by two players choosing between two strategies: cooperation or defection. The payoffs can be represented in the following matrix:


$$\begin{array}{c|c|c} & \text{Cooperate} & \text{Defect} \\ \hline \text{Cooperate} & (2,2) & (0,3) \\ \hline \text{Defect} & (3,0) & (1,1) \\ \end{array}$$

Where:

The central questions are: - Can repeated games sustain cooperation? - How and when does cooperation emerge?

Importance of Repeated Interaction

When players interact repeatedly, the prospect of future rewards and the prevention of future punishments influence their current decisions. This is captured in the following principle:

In ongoing relationships, the promise of future rewards and the threat of future punishments may provide incentives for good behavior today.

Unraveling Argument

If a game is played a finite number of times, we often face unraveling — the knowledge of a final stage leads players to defect in earlier stages. Each player’s strategic decision is based on the anticipated behavior in the final round, which may lead to:

If you know there is a last round where defection is optimal, it incentivizes defection in prior rounds.

In the Prisoners’ Dilemma, if the game is known to be played for only two rounds, the players will tend to defect from the beginning, leading to the outcome of (Defect, Defect).

The Grim Trigger Strategy

A notable strategy in repeated interactions is the Grim Trigger Strategy, which states:

Cooperate until the other player defects; once defection occurs, defect forever.

This strategy creates a strong incentive for cooperation, as a single deviation will lead to continuous defection.

Sustaining Cooperation with Infinite Repetitions

When interactions are modeled into an infinite horizon (or a lengthy repetitive condition without a known endpoint), the dynamics change:

Payoffs Analysis

To analyze the potential for sustained cooperation in the Prisoners’ Dilemma under infinite repetition:

Let δ be the probability of continuing the game through the next round, which is less than one (δ < 1). The temptation to defect is then compared to the values of reward for cooperation and punishment for defection:


$$\begin{aligned} \text{Temptation to Cheat} &= \text{Payoff from Defection} - \text{Payoff from Cooperation} \\ \text{Temptation} &= (3) - (2) = 1\end{aligned}$$

The expected rewards and punishments need to be computed:


$$\text{Value of Cooperation} = 2 \cdot \left(1 + \delta + \delta^2 + \ldots\right) = \frac{2}{1 - \delta}$$

When evaluating the value of defecting:


Value of Defection = 0  (since both defect in subsequent rounds)

Inequality for Cooperation

To sustain cooperation:


Temptation ≤ δ(Reward − Punishment)

Thus for cooperation to be sustained under the Grim Trigger Strategy:


$$1 \leq \delta\left(\frac{2}{1 - \delta} - 0\right)$$

which simplifies to δ ≥ 1/3: cooperation is sustainable only when the probability of future interaction is sufficiently high.

Conclusion

In summary, repeated interactions can help sustain cooperation through strategies such as the Grim Trigger Strategy, provided that:

This exploration ultimately bridges behavioral economics with game theory, illustrating the complexity of human interactions in real-world scenarios, free from contracts or enforcement mechanisms.

Repeated Interaction and Cooperation

Introduction

In our previous session, we focused on repeated interactions, with special attention to whether we can achieve cooperation in business or personal relationships without contracts. This week’s exploration emphasizes the idea that the future of a relationship might provide incentives for good behavior today, helping to deter cheating.

The Central Intuition

Consider a business relationship between two parties, where each party supplies goods for the other. For instance, if one party supplies fruit and the other vegetables, there are opportunities for both to cheat on quality or quantity. The key intuition is that cooperation today might lead to cooperation tomorrow and vice versa, creating an incentive structure based on the future interaction.
Let VC be the value of continued cooperation and VD the value when cheating occurs. The inequality we want to satisfy is:


Gain from cheating today < VC − VD

To formalize this, let’s denote:

We need:


GC < VC − VD

Credibility of Promises and Threats

A key takeaway from previous discussions is the need for the promises of cooperation and the threats of punishment to be credible. If the relationship is known to end after a certain number of interactions, the last period may not allow for cooperation to be sustained, leading both parties to act according to Nash equilibrium strategies in the final round. This weakness in credibility results in "unraveling."


To address this issue, we focus on the concept of sub-game perfect equilibria (SPE), which ensures Nash behavior in every sub-game, particularly in the last rounds of the game.

The Prisoner’s Dilemma

We can analyze cooperation in repeated versions of the Prisoner’s Dilemma by introducing a probability δ, the chance the relationship will continue each period. Thus, at every iteration:

This setup prevents the unraveling problem, as we may now establish credible threats to deter cheating.

The Grim Trigger Strategy

A typical strategy is the Grim Trigger strategy, which states: cooperate in the first period and continue cooperating as long as the other party has not cheated. If a party cheats, they switch to defecting indefinitely.

To evaluate the credibility of this strategy, we analyze:


Temptation to cheat today < δ(Value of promise−Value of threat)

Let:

Values

  1. The temptation from cheating today: T = 3 − 2 = 1.

  2. The promise of cooperating indefinitely is worth $\frac{2}{1 - \delta}$.

  3. The threat of defecting forever is worth 0.

Thus, the inequality we have is:


$$1 < \delta \left( \frac{2}{1 - \delta} \right)$$

This leads to the condition:


$$\delta > \frac{1}{3}$$
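The condition is a direct translation of the inequality above and can be checked numerically:

```python
# Grim-trigger sustainability check:
#   temptation < delta * (value of promise - value of threat)
# with temptation = 3 - 2 = 1, promise = 2/(1 - delta), threat = 0.

def cooperation_sustainable(delta):
    temptation = 3 - 2                       # one-period gain from cheating
    promise = 2 / (1 - delta)                # cooperate forever hereafter
    threat = 0                               # mutual defection forever
    return temptation < delta * (promise - threat)

print(cooperation_sustainable(0.2))   # → False (delta below 1/3)
print(cooperation_sustainable(0.5))   # → True
```

Below δ = 1/3 the future is not valuable enough to outweigh today’s one-shot gain from cheating, and cooperation collapses.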

Consequences of Cheating

The argument can be extended to deviations from the Grim Trigger strategy that also need to be assessed for profitability. Any deviation generating less payoff than cooperating or resulting in permanent defection strengthens our conclusions.
If an agent cheats in the first period and returns to cooperation in the second, the same comparison applies: cooperation is sustained as long as every possible deviation yields less than the value of continued loyalty and future interaction.

Real-World Applications

The analysis of repeated cooperation extends to real-world scenarios. For instance:

Conclusion

The key takeaways can be summarized:

Lecture Notes on Asymmetric Information and Signaling

Introduction

In this lecture, we will study asymmetric information, focusing on signaling. We will begin with a basic example to develop our understanding.

Cournot Competition Setup

We consider a Cournot competition involving two firms, denoted as Firm A and Firm B. Firm B has constant marginal costs denoted by cM (medium costs). Firm A can have one of three types of costs:

where ϵ is a small positive number.

Before the Cournot competition occurs, Firm A has the opportunity to reveal its true costs to Firm B in a verifiable manner. For instance, Firm A can hire an accountant to publicize its costs in a reputable journal.

Should Firm A Reveal Costs?

The key question is whether Firm A should reveal its costs:

Informational Unraveling

The concept of informational unraveling is introduced.

Key Takeaway

The lack of signaling can also provide information. The absence of attempts to signal (e.g., not revealing high costs) conveys information about the firm’s status (silence speaks volumes).

Costly Signaling Model: The Spence Model

We derive a model based on cost signaling, focusing on education as a means of signaling the productivity of workers.

Worker Types

Assuming there are two types of workers:

Education as Signaling

Workers can obtain an education (e.g., an MBA) to signal their productivity. The cost of education differs between worker types:
$$\begin{aligned} \text{Cost for Good Workers} &= 5 \text{ (per year)} \\ \text{Cost for Bad Workers} &= 10.01 \text{ (per year)}\end{aligned}$$

Equilibrium Analysis

We can derive an equilibrium where:

Verifying Equilibrium

To validate that this scenario forms an equilibrium, we check:

  1. No worker type has an incentive to deviate from their strategy.

  2. Employers’ belief about worker productivity is consistent with equilibrium behavior.
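These incentive checks can be made concrete. The wage figures below are hypothetical (the notes give only the per-year education costs, 5 and 10.01): assume good workers earn a wage of 50 with the degree, everyone else earns 40, and the MBA takes `years` years.

```python
# Separating-equilibrium check for the Spence model, with HYPOTHETICAL
# wages (50 for MBA holders, 40 otherwise); only the per-year costs
# (5 for good workers, 10.01 for bad) come from the notes.

GOOD_COST, BAD_COST = 5.0, 10.01       # per-year cost of the degree
WAGE_MBA, WAGE_NO_MBA = 50.0, 40.0     # assumed wages

def separating_equilibrium(years):
    gain = WAGE_MBA - WAGE_NO_MBA               # wage premium = 10
    good_signals = gain >= GOOD_COST * years     # good workers get the MBA
    bad_abstains = gain < BAD_COST * years       # bad workers do not
    return good_signals and bad_abstains

print(separating_equilibrium(1))  # → True: a one-year degree separates
print(separating_equilibrium(3))  # → False: too costly even for good workers
```

The tiny cost gap (10 vs 10.01 against a premium of 10) is what makes a one-year program just barely separating, which sets up the discussion of degree duration below.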

Impact of Degree Duration

The length of the MBA program significantly impacts signaling:

Lessons from the Spence Model

Conclusion

The model illustrates that education may separate good workers from bad workers without inherently increasing skills or productivity, leading to considerations about the social utility of education systems.

Lecture Notes on Auctions

Types of Value

Common Value Auction

In a common value auction, all bidders value the item the same, but may have different estimates of its value. For example, in an oil drilling auction, each company estimates how much oil is present in a field, but the actual value of the oil is the same for all once extracted. Denote the common value as:
V = Value of the good

Private Value Auction

In contrast, a private value auction occurs when each bidder has their own valuation of the item, which does not depend on others’ valuations. Let the private value for bidder i be denoted as:
Vi
Where Vi is independent of other bidders’ valuations.

Examples of Auctions

Jars of Coins

Professor Polak presents a practical example of an auction with jars of coins, where the true number of coins (common value) is unknown to bidders but they make bids based on their estimates.

Winner’s Curse

The winner’s curse is a phenomenon in common value auctions where the winning bidder tends to overestimate the value of the item won. This occurs because the highest bidder is typically the one with the largest upward error in their estimate. The main lessons include:

The payoff in an auction can be defined as:
Payoff = True value − Bid

Bidding Strategies

To optimize bidding in a common value auction:

  1. Bidders should choose their bids conditional on the event that they win.

  2. The best estimate must account for the fact that winning implies having a higher estimate than others.
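A small Monte Carlo simulation (with illustrative numbers) makes the selection effect behind the winner’s curse visible: each bidder’s estimate is unbiased, yet the highest estimate, the one that wins, is biased upward.

```python
# Winner's curse illustration: unbiased noisy estimates of a common
# value, but the maximum estimate systematically exceeds the true value.

import random

random.seed(0)
TRUE_VALUE = 100.0          # common value of the good (e.g. the jar of coins)
N_BIDDERS = 8
trials = 10_000

winner_estimates = []
for _ in range(trials):
    estimates = [random.gauss(TRUE_VALUE, 10) for _ in range(N_BIDDERS)]
    winner_estimates.append(max(estimates))   # the naive high bidder wins

avg_winner_estimate = sum(winner_estimates) / trials
print(avg_winner_estimate > TRUE_VALUE)       # → True: the winner overestimates
```

A bidder who bids their raw estimate therefore expects a negative payoff conditional on winning; shading the bid to account for that selection is the corrective strategy described above.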

Types of Auctions

Four common types of auctions are discussed:

Comparison of Auctions

The difference in strategies is summarized as:
$$\begin{aligned} \text{Second-Price Auction:} & \quad B_i = V_i \\ \text{First-Price Auction:} & \quad B_i < V_i\end{aligned}$$

Expected Revenue

In a private values environment, under certain conditions (symmetry and independence), the expected revenue from a first-price auction and a second-price auction is the same.
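This revenue equivalence can be checked by simulation in the standard textbook setting (an assumption here, not spelled out in the notes): independent private values drawn uniformly from [0, 1], truthful bidding in the second-price auction, and the equilibrium bid b = v(n − 1)/n in the first-price auction.

```python
# Monte Carlo check of revenue equivalence with symmetric, independent
# private values uniform on [0, 1].

import random

random.seed(1)
n, trials = 4, 50_000
rev_first = rev_second = 0.0

for _ in range(trials):
    values = sorted(random.random() for _ in range(n))
    # Second-price: everyone bids truthfully; seller receives the
    # second-highest value.
    rev_second += values[-2]
    # First-price: equilibrium bid shades value to v*(n-1)/n; seller
    # receives the highest bid.
    rev_first += values[-1] * (n - 1) / n

print(round(rev_first / trials, 2), round(rev_second / trials, 2))
```

Both averages converge to (n − 1)/(n + 1), which is 0.6 for four bidders: the formats allocate the good identically and leave bidders the same expected surplus, so the seller’s expected revenue coincides.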

Conclusion

Professor Polak concludes by emphasizing the practical implications of understanding auction types and bidding strategies. Each auction’s structure significantly affects bidding behavior and ultimate outcomes, particularly in the presence of common or private values.