/r/GAMETHEORY


Game theory is the science of strategy and decision-making using mathematical models.


About

This subreddit is a place for both experienced and novice strategists to gather and discuss problems, scenarios, and decision-making, as well as to post and find articles that display the modern use of game theory.

If life is a game, learn how to play!


Rules

  1. Stay on topic! This subreddit was designed with specific subject matter in mind. Public discussion on the subreddit should pertain, at least loosely, to game theory and how it affects the world around us.

  2. Be respectful! There are many complicated facets and ideas that go into game theory, and as such, there will be people who disagree with each other. This is great! This is what game theory is about: strategy, argument, victory, and defeat. However, we do hope that everyone can remain civil in the discussion of such topics.


Resources

Print

Web


/r/GAMETHEORY

29,416 Subscribers

1

Help with Calculating the Nash Equilibrium for My University Game Project

Hi guys. I created a game for a university project and need help figuring out how to calculate the Nash equilibrium. The game is a two-player simultaneous-move game with incomplete information, played over a maximum of three rounds. One player secretly chooses coins, the other tries to guess the number chosen, and the goal is to outsmart the opponent.

To make it more interactive and to gather real-world data from people, I built a website where you can play the game. There’s also an "AI" opponent, which is based on results from a Counterfactual Regret Minimization (CFR) algorithm. If you’re curious, you can check it out here:

https://coin-game-five.vercel.app

I would be super grateful if someone could help me understand how to calculate the Nash Equilibrium for this game by hand. These are the rules:

Game Material

  • 5 coins or similar small items
  • 2 players

Game Setup

  • One player is designated as the Coin Player and receives the coins.
  • The other player becomes the Guesser.

Gameplay

The game consists of a maximum of 3 rounds. In each round:

  1. The Coin Player secretly chooses between 0 and 5 coins.
  2. The Guesser attempts to guess the number of coins chosen.
  3. The Coin Player reveals the chosen coins at the end of each round.

Rules for Coin Selection

  • The number of coins chosen must increase from round to round, with the following exceptions:
    • If 5 coins are chosen, 5 can be chosen in the next round again.
    • The Coin Player is allowed to choose 0 coins once per game in any round.
    • After a 0-coin round, the next choice must be higher than the last non-zero choice.

Game End and Winning Conditions

  • The Coin Player wins if the Guesser guesses incorrectly in all three rounds.
  • The Guesser wins as soon as he guesses correctly in any round.
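
(Not the by-hand derivation asked for, but it may help to see the structure.) Because the Coin Player wins exactly when the Guesser never guesses correctly, the game is zero-sum, and the public history (round number, last non-zero choice, whether 0 has already been used) is common knowledge after each reveal. So one way to compute the equilibrium is backward induction over those histories, solving a small zero-sum matrix game at each stage whose entries are continuation values. A rough sketch, assuming the Coin Player's payoff is 1 for surviving all three rounds and 0 otherwise, and using scipy's linear-programming solver for the stage games:

    # Sketch: solve the coin game by backward induction over public histories.
    # Assumes the Coin Player's payoff is 1 for surviving all 3 rounds unguessed
    # and 0 otherwise (so the game is zero-sum); requires numpy and scipy.
    from functools import lru_cache
    import numpy as np
    from scipy.optimize import linprog

    def matrix_game_value(M):
        """Value of a zero-sum matrix game for the (maximizing) row player."""
        m, n = M.shape
        # Variables: row mix p (m entries) and value v; maximize v subject to
        # M^T p >= v componentwise, sum(p) = 1, p >= 0.
        c = np.zeros(m + 1); c[-1] = -1.0                  # linprog minimizes, so use -v
        A_ub = np.hstack([-M.T, np.ones((n, 1))])          # v - (M^T p)_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]
        return linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds).x[-1]

    @lru_cache(maxsize=None)
    def value(round_no, last_nonzero, zero_used):
        """Coin Player's equilibrium win probability from this public history."""
        if round_no > 3:
            return 1.0                                     # never guessed: Coin Player wins
        choices = [c for c in range(1, 6) if c > last_nonzero or c == 5 == last_nonzero]
        if not zero_used:
            choices = [0] + choices                        # the once-per-game 0 option
        guesses = range(6)
        M = np.zeros((len(choices), len(guesses)))
        for i, c in enumerate(choices):
            for j, g in enumerate(guesses):
                if c == g:
                    M[i, j] = 0.0                          # guessed correctly: Coin Player loses
                else:
                    nz = last_nonzero if c == 0 else c
                    M[i, j] = value(round_no + 1, nz, zero_used or c == 0)
        return matrix_game_value(M)

    print("Coin Player's equilibrium win probability:", value(1, 0, False))

The by-hand version follows the same recursion: in the final round the stage game has value 1 - 1/k, where k is the number of choices still legal, and earlier rounds plug those values back into their own small matrix games.
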
0 Comments
2024/12/01
22:10 UTC

1

An addition to the Prisoner's Dilemma: Someone tell me why this is wrong

Over the past four months, as a layperson, I have tried to understand the prisoner's dilemma. I have come up with questions that I have been building on, and I want feedback from the experts on this subreddit. My first question is why the prisoner's dilemma is used with only two options, i.e. defection and cooperation. Would the dilemma not be more vibrant and applicable to more scenarios if we had an intermediate option between the two states? Would that not represent a greater variety of situations?

The Nash equilibrium in the standard model is mutual defection, i.e. the rational choice. The conditions for this are that T > R > P > S, and the second requirement is that 2R > S + T. If these requirements are met, we can conclude that the suboptimal outcome of mutual defection, i.e. P (the defectors' punishment), is the rational choice for both players.

Let us now change the game and introduce a new option; let's call it "static". The static option essentially dampens the effect of the other side's defection on the static player, and likewise dampens the static player's effect on the defector. Consider the static state an intermediate situation. We see it in many social situations: if you have a friend you are not talking to because he offended you, or if you meet him in other company but stay reserved and don't talk to him, are you cooperating or defecting? I think you are doing neither; you are in an intermediate situation. I have tried to understand this and can find no reason not to consider an intermediate option.

Now back to the prisoners themselves. Do the prisoners not have a third option, i.e. to refuse to answer and let the police do what they want, or at least to neither cooperate nor defect? I have tried to understand the implications of this intermediate stage, and I find the results a bit surprising: if we introduce it, the Nash equilibrium no longer falls on mutual defection (P) but on a different state which is very similar to P but is most definitely not P. I have attached my findings on the OSF and hope that you can explain what is wrong. When you open the OSF link, go to the Files section and choose the file named "The modified prisoner's dilemma (version four) (2).docx", which is the latest file containing my findings and hypothesis. Thanks.

The following is the link:

https://osf.io/usv72/?view_only=e7bb095fe7eb43b9816c02bcaac71324
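
For anyone wanting to sanity-check a 3x3 Cooperate/Static/Defect variant without reading the document first, a minimal brute-force check for pure-strategy Nash equilibria is easy to write. The payoff numbers below are made up purely for illustration (they are not taken from the OSF file); swap in the actual values to see where the equilibrium lands once Static is added.

    # Brute-force pure-strategy Nash check for a symmetric 3x3 game.
    # The payoffs here are hypothetical placeholders, not the OP's model.
    import numpy as np

    actions = ["Cooperate", "Static", "Defect"]
    A = np.array([
        [3, 2, 0],   # row player's payoff for Cooperate vs (C, S, D)
        [2, 2, 1],   # Static vs (C, S, D)
        [5, 3, 1],   # Defect vs (C, S, D)
    ])
    B = A.T          # symmetric game: column player's payoffs

    for i in range(3):
        for j in range(3):
            row_best = A[i, j] >= A[:, j].max()   # row player cannot gain by deviating
            col_best = B[i, j] >= B[i, :].max()   # column player cannot gain by deviating
            if row_best and col_best:
                print("Pure-strategy NE:", actions[i], "/", actions[j])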

1 Comment
2024/12/01
13:01 UTC

3

Repeated simple games

Hello. I have a very simple 2x2 game and found 2 Nash equilibria. Now I'm asked what will happen if the game is repeated 10 times, and I'm not sure what to say. Is it random which equilibrium they will reach each time?

5 Comments
2024/12/01
08:33 UTC

1

How can we model alternating Stackelberg pairs?

I have yet to take a formal game theory class; however, I am working on a project where I want to represent more than 2 players in a game-theoretic setting. I am well aware of the limitations of this, but does anyone know if we can have alternating Stackelberg pairs? That is to say, suppose we have players A, B, C, D. Then we have pairs AB, BC, CD that can each have a leader and a follower (we can say A leads B but B leads C). Then suppose C now leads B; then we have pairs AC, CB, BD, and so on. Is this a viable strategy that we can use? If not, can you please explain why, and if so, can you please suggest further reading on the topic? I am a math major, so don't shy away from using math in your responses.
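
For what it's worth, each leader-follower pair can be handled with the usual Stackelberg recipe, and one can then iterate over the pairs in whatever order the model dictates (the ordering generally changes the outcome, which is part of what you would need to justify). A minimal sketch for one pair with finite action sets and hypothetical payoff matrices:

    # One leader-follower pair: the leader commits, the follower best-responds,
    # and the leader picks the commitment maximizing its own payoff. Payoffs are hypothetical.
    import numpy as np

    def stackelberg_pure(leader_payoff, follower_payoff):
        """Pure-strategy Stackelberg outcome for a bimatrix game (leader = rows)."""
        best = None
        for a in range(leader_payoff.shape[0]):
            b = int(np.argmax(follower_payoff[a]))      # follower's best response to a
            if best is None or leader_payoff[a, b] > best[2]:
                best = (a, b, leader_payoff[a, b])
        return best                                     # (leader action, follower action, leader payoff)

    L = np.array([[3, 1], [4, 0]])   # e.g. the pair "A leads B"
    F = np.array([[2, 3], [1, 0]])
    print(stackelberg_pure(L, F))

Note that ties in the follower's best response need a tie-breaking convention (optimistic vs. pessimistic Stackelberg), and that chaining pairs like AB, BC, CD amounts to a sequence of such solves rather than a single equilibrium concept, which is worth stating explicitly.
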

Thanks for your help!

0 Comments
2024/12/01
02:05 UTC

3

Help with Bayesian Nash Equilibrium question

Hi, I've been trying to solve the following question for the past couple of hours, but can't seem to figure it out. Bayesian NE confuses me a lot. The question:

https://preview.redd.it/xyntm49p5z3e1.png?width=1866&format=png&auto=webp&s=3f5fbbcb5d2f02b751f5dadd1bfaa1d387661f74

https://preview.redd.it/s9m3no1q5z3e1.png?width=1808&format=png&auto=webp&s=cea8699b77c46cf83b6dd86b34cc57d98aba8163

So far, while trying to solve part (a), I got this:

Seller's car value: r_i is between 1 and 2
Buyer values a car at b * r_i, and b must be > 1
Market participation:
- The seller will sell his car if the price p >= r_i
- The buyer will buy a car if b * r_i >= p
So for the seller, p must be >= 2, the highest value of r_i.
For the buyer, the condition is b * r_i >= p, with b = 1.5; filling in r_i = 1 gives 1.5 * 1 >= p, so p <= 1.5. So for the buyer, p must be 1.5 or lower.

-----

Am I doing this correctly? If yes, how should I continue, and how do I write this down as a BNE? If not, please explain why.
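
One thing that may help, hedged because the screenshots aren't transcribed here: if the seller's value r is drawn from a distribution on [1, 2] (say uniform) and only the seller observes it, then the buyer's participation condition should use the expected quality conditional on the seller accepting, E[r | r <= p], rather than the worst case r = 1. A tiny numerical check under those assumptions:

    # Assumes r ~ Uniform[1, 2], buyer's value = b * r with b = 1.5, and that a
    # seller of type r accepts a price p iff p >= r. These assumptions may differ
    # from the actual exercise; adjust to match it.
    import numpy as np

    b = 1.5
    for p in np.arange(1.0, 2.01, 0.25):
        upper = min(p, 2.0)
        expected_quality = (1.0 + upper) / 2.0        # E[r | r <= p] for a uniform r
        buyer_surplus = b * expected_quality - p      # buyer's expected gain from buying at p
        print(f"p = {p:.2f}: E[r | sale] = {expected_quality:.3f}, buyer surplus = {buyer_surplus:+.3f}")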

0 Comments
2024/11/30
05:33 UTC

5

Social/strategy game equilibrium with favored/advantaged players?

The other day I watched one of the "best" Risk players in the world streaming, and the dynamic was that every other player recognized his rank/prowess and prioritized killing him off as quickly as possible, resulting in him quickly losing every match in the session.

This made me wonder: is there any solid research on player threat identification and finding winrate equilibrium in this kind of game? Something where strategy can give more quantifiable advantages but social dynamics and politics can still cause “the biggest threat” to get buried early in a match.

Not a math major or game theorist at all, just an HS math tutor, so I'll be able to follow some explanations, but please forgive any ignorance 😅 Thanks to anyone who provides an enlightening read.

1 Comment
2024/11/29
08:52 UTC

2

Help, I've been stuck on this for a while and I don't even know where to start

The trust game is a two-player game with three periods. Player 1 starts off with $10. He can send an amount 0 ≤ x ≤ 10 to Player 2. The experimenter triples the sent amount, so that Player 2 receives 3x. Player 2 can then send an amount 0 ≤ y ≤ 3x back to Player 1. Draw a diagram of the extensive form of this game.
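
The question only asks for the extensive-form diagram (Player 1's continuum of transfers, then Player 2's continuum of back-transfers, with payoffs (10 - x + y, 3x - y) at the leaves). As a side check, here is a minimal backward-induction sketch, assuming both players care only about their own money and restricting transfers to whole dollars:

    # Backward induction on a discretized trust game (whole-dollar transfers).
    def trust_game_spe():
        best = None
        for x in range(11):            # Player 1 sends x, keeping 10 - x
            pot = 3 * x                # the experimenter triples the transfer
            y = 0                      # a purely selfish Player 2 returns nothing
            p1, p2 = 10 - x + y, pot - y
            if best is None or p1 > best[1]:
                best = (x, p1, p2)
        return best                    # subgame-perfect outcome: x = 0, payoffs (10, 0)

    print(trust_game_spe())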

2 Comments
2024/11/29
00:54 UTC

4

What are the Nash equilibria of the following payoff matrix? How are they found? (Thank you u/noimtherealsoapbox for the LaTeX design)

1 Comment
2024/11/28
13:36 UTC

6

Money death button

I found a button and every time I press it I get $1000. There is a warning on the button that says every time I press it there is a random 1 in a million chance I will die. How many times should I press it?

I kind of want to press it a thousand times to make a cool million bucks... I suck at probability, but I think if I press it a thousand times there is only about a 1 in 1000 chance I will die... Is that correct?
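
Your estimate is roughly right, assuming each press is an independent 1-in-1,000,000 risk:

    # Probability of dying at least once in n independent presses.
    p_die = 1e-6
    n = 1000
    p_at_least_one_death = 1 - (1 - p_die) ** n
    print(p_at_least_one_death)   # about 0.0009995, i.e. roughly 1 in 1000
    print(1 / p_die)              # expected number of presses before death: 1,000,000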

29 Comments
2024/11/27
07:07 UTC

1

Where to learn Subgame Perfect EQ?

I am extremely behind in my undergrad game theory course, and the biggest thing I don't get is subgame perfect equilibrium, especially with signaling games. I can't follow during lectures, and the notes are even more confusing. Is there any Organic Chemistry Tutor-esque resource where I can intuitively learn some of the more advanced topics in game theory?

5 Comments
2024/11/27
05:02 UTC

1

Same Payoff?

If player A chooses an action, and player B has two options that give the same payoff, what happens when determining the Nash equilibrium?

5 Comments
2024/11/26
03:26 UTC

3

5 Gold Bags Problem

Hi everyone! I'm here with a variant of the two-envelopes problem, for which I keep finding completely contradictory solutions.

There are five bags containing 10, 20, 40, 80, and 160 gold coins, respectively. Two bags are selected randomly, with the constraint that one of the two bags contains twice as many coins as the other (otherwise said, the two bags are, with the same probability, the bags containing 10 and 20 coins, or those containing 20 and 40, or 40 and 80, or 80 and 160 coins). The two selected bags are then assigned to two players (each player gets one of the two bags with equal probability). After seeing the contents of her bag, but not the contents of the other bag, each player is asked if she wants to switch bags with the other player. If both want to switch, the exchange occurs.

This is just the envelope paradox rewritten, and finite. I've reached multiple solutions that are contradictory.

First, I can fix the total value in the two bags as U, so the two bags contain 2U/3 and U/3, and the expected payout from switching is 0.

Second, I can write that if I find U in my bag, there is an equal probability of the other bag containing 2U or U/2, which gives an expected value of 5U/4 from switching.

Third, by backward induction from 160, no one wants to switch (if I have 160 I won't switch, so the person who gets 80 won't switch, knowing the one with 160 would never switch; thus switching only makes him potentially lose money to a person with 40).

Fourth: we could say, for example, that the pairs (10, 20) and (20, 40) are equally likely. If I as a player pick 20 and always swap, I get 0 if the opposing player doesn't swap, and -10 or +20 if he swaps, which is an expected payout of +5.

So with 4 approaches that I think are all logically fine, I get different payouts and different equilibria. I know this is supposed to be a paradox, but I believe the finite version has an answer, so what gives?

The original question is to find the Bayesian Nash Equilibrium.
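
A quick enumeration can at least arbitrate between the approaches. The sketch below assumes the uniform prior over the four adjacent pairs described above and, purely for illustration, restricts both players to cutoff rules of the form "switch iff my bag is below t". With identical rules the expected gain is zero by symmetry, which is one way to see why the 5U/4 argument of the second approach cannot apply to every observed amount at once.

    # Expected gain to player 1 when both players use "switch iff amount < t" rules.
    from itertools import product

    pairs = [(10, 20), (20, 40), (40, 80), (80, 160)]   # each pair drawn with probability 1/4

    def expected_gain(t1, t2):
        total, count = 0.0, 0
        for (low, high), flip in product(pairs, [0, 1]):
            a, b = (low, high) if flip == 0 else (high, low)   # a is player 1's bag
            if a < t1 and b < t2:                              # exchange only if BOTH want to switch
                total += b - a
            count += 1
        return total / count

    for t in [15, 30, 60, 120, 1000]:                          # 1000 = "always switch"
        print(f"both switch below {t}: player 1's expected gain = {expected_gain(t, t):+.2f}")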

Thanks a lot!

2 Comments
2024/11/23
10:16 UTC

1

Help if you can! It's a simple question, but any help is very appreciated.

0 Comments
2024/11/22
18:59 UTC

4

Looking for resources to solve tons of probabilistic games which have some risk component

Hey guys, I'm looking for resources (either textbooks or online) with a bunch of games that require managing risk, preferably through managing a bankroll or making decisions based on some probabilistic component of the game. I'm interested in learning how to solve for mixed Nash equilibria in these games, and if they have some Kelly-criterion bet-sizing component, that would be great.

This is super specific, but I'm really just looking to get more comfortable thinking about the strategy and game theory portion of these types of problems, so let me know! Thank you in advance.
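
Not a textbook recommendation, but since the Kelly criterion comes up: for the simplest binary bet (win probability p, net odds of b to 1) the criterion has a one-line closed form, which is a handy baseline when you start mixing bet sizing into these games.

    # Kelly fraction for a binary bet: f* = p - (1 - p) / b, floored at zero.
    def kelly_fraction(p: float, b: float) -> float:
        return max(0.0, p - (1 - p) / b)

    print(kelly_fraction(0.6, 1.0))   # even-money bet with a 60% win chance -> bet 20% of bankroll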

3 Comments
2024/11/22
05:21 UTC

3

Project idea for master's class

Hello guys,

For my master's class in Data Science, we need to implement (as a team of 2) an original project (6-8 pages of report/essay). My teammate and I thought of combining some of the topics the professor had presented and came up with this: "Bayesian Games with AoI (Age of Information) and Position Uncertainty". But I've been doing some research on the topic, and it seems like it requires a lot of work. The deadline is mid-January. What would you say about the subject? Is it doable in a reasonable time? I'm familiar with the GT part, but I don't know how much time it would take to get acquainted with the other topics (like AoI, physical positioning in wireless networks, etc.). Here are the other topics that we can choose our project subject from:

Autonomous agents (drones, cars, intelligent vehicles)

Social models (adherence to norms, fake news, compliance)

Access problems (with many technological scenarios)

Age of Information (analytical scenario for meta-games)

Markets (provision of ICT goods)

Energy (a key technological driver)

Physical position (another wireless communication aspect)

Reflective intelligent surface (an important technological development)

Crowdsensing (federated services in the sensing realm)

Vehicular/mobile computing (networks with mobile elements and resource negotiation)

If there's a more interesting topic that's doable in a reasonable time, please let me know!

1 Comment
2024/11/18
23:10 UTC

2

Mixed strategy norm game deduction

Hello, I have a norm game problem:

Payoff table for p1 and p2

The question asks which pure strategies survive iterated strict dominance. I checked the solution; it shows that B is strictly dominated by 2/5 A + 3/5 C, so B is eliminated.

I could not derive this mixed strategy. The only thing I got is: when p2 plays a, I set p*A + (1-p)*C > B and got p < 1/2, and similarly when p2 plays c. So I got 1/3 < p < 1/2. How can I derive that exact mixed-strategy proportion in this game? Thanks.
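
If your interval is right, that is essentially the answer: every p with 1/3 < p < 1/2 gives a mixture pA + (1-p)C that strictly dominates B, and the solution simply picked the convenient point 2/5 inside that interval. A small check with exact arithmetic, using a hypothetical payoff table (the real one is in the image above), shows the pattern:

    # Dominance check over a few candidate weights p, with made-up payoffs whose
    # binding constraints happen to give the interval (1/3, 1/2); replace U with
    # the actual table from the problem.
    from fractions import Fraction as F

    U = {              # U[row] = payoffs against columns (a, b, c)
        "A": [F(0), F(2), F(3)],
        "B": [F(1), F(1), F(1)],
        "C": [F(2), F(2), F(0)],
    }

    def mixture_strictly_dominates_B(p):
        mix = [p * ua + (1 - p) * uc for ua, uc in zip(U["A"], U["C"])]
        return all(m > ub for m, ub in zip(mix, U["B"]))

    for p in [F(1, 3), F(2, 5), F(1, 2)]:
        print(p, mixture_strictly_dominates_B(p))   # endpoints fail (only weak dominance), 2/5 passes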

1 Comment
2024/11/17
18:58 UTC

7

Please Help!

I'm studying for an exam tomorrow and my lecturer has provided a sample exam; according to his solution, the correct answer to this problem is B. I understand that "Rome, (Lisbon, Lisbon)" and "Lisbon, (Rome, Rome)" work, but I can't understand how "Rome, (Rome, Lisbon)" works. I would have thought that doing the opposite of Aer Lingus, "Rome, (Lisbon, Rome)", would be the correct answer, but I must be misunderstanding this, so could someone please explain it to me? Thanks!

7 Comments
2024/11/17
17:00 UTC

2

Fire Emblem Expectimax AI

I am currently creating the enemy-phase AI for a Fire Emblem-like game. In Fire Emblem there is an enemy phase where all of the enemies move on that turn. I came up with two approaches and wanted to see if there are any recommendations on how to do this.

Approach 1:

  1. Build a map of all permutations, with the attacker's location as key and the target entity as value.
  2. Simulate the battle on the game state. For every possible outcome of the battle (attack misses, crits, etc.), create a new game state.
  3. Keep increasing the search depth until we run out of time, which is about 2-3 seconds.

Approach 2:

  1. Build a map of all permutations, with the attacker's location as key and the target entity as value.
  2. Simulate the battle on the game state, calculating the expected outcome by weighting each result by its probability.
  3. Keep increasing the search depth until we run out of time, which is about 2-3 seconds.

Basically, the difference is in step 2: either brute-force the exact game states or estimate the expected game state. I'm leaning towards Approach 2 being better, as I'm guessing it reduces the breadth scaling significantly, allowing the search to go 1 or 2 depth levels deeper.

The problem is that it would be simulating impossible game states: for example, with a 50% crit chance and 10 damage (3x damage on crit), the expected damage would be 20, even though 20 damage can never actually occur. I think it's fine, but I want to double-check what others think.
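
A toy version of the two approaches makes the trade-off concrete. The numbers and the evaluation function below are illustrative only: a single attack that hits for 10 or crits for 30 with equal probability, against a target on 25 HP. Approach 1 averages the values of the two real successor states; Approach 2 evaluates one fictitious "expected" state, and the two disagree exactly when the damage distribution straddles a kill threshold, which is where the approximation is most likely to mislead the AI.

    # Exact-outcome expectimax (Approach 1) vs. expected-state evaluation (Approach 2).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class State:
        enemy_hp: int

    def value(state: State) -> float:
        return -state.enemy_hp            # placeholder evaluation: lower enemy HP is better

    OUTCOMES = [(0.5, 10), (0.5, 30)]     # (probability, damage): hit vs. crit

    def approach_1(state: State) -> float:
        # Branch on every real outcome and average their values.
        return sum(p * value(State(max(0, state.enemy_hp - dmg))) for p, dmg in OUTCOMES)

    def approach_2(state: State) -> float:
        # Collapse the chance node into one expected state (20 damage here),
        # a state that can never actually occur in the game.
        expected_dmg = sum(p * dmg for p, dmg in OUTCOMES)
        return value(State(max(0, round(state.enemy_hp - expected_dmg))))

    s = State(enemy_hp=25)
    print(approach_1(s), approach_2(s))   # they differ because the crit branch is a kill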

0 Comments
2024/11/17
10:08 UTC

11

How do I learn this?

So I recently came across this website https://ncase.me/trust/ and learned about game theory from it.

I want to learn more about it. Are there any more fun sites like that? Where can I find resources to learn game theory from the very beginning?

3 Comments
2024/11/16
11:34 UTC

1

Problems with understanding utility functions

Hello!

I am an International Relations undergrad diving into game theory. I started my journey into the subject by trying to read "Are Sanctions Effective? A Game-Theoretic Analysis" by Tsebelis (1990). The title is self-explanatory. In this paper, he lays out a few assumptions about preferences that I'll post as an image, and gives the reader the normal 2x2 representation of the game. After that, he goes into a scenario of sanctions as a game with simultaneous moves, complete information, rationality, and continuous choices. The continuous-choices part simply means the players (the target and the sender of sanctions) get to decide how much rule violation (x between 0 and 1) and how much sanctioning (y between 0 and 1) they will do.
My first problem is with the utility functions u_1 and u_2. First of all, how does he even generate them? I have never seen the utility function of an entire player like that, only the utility of a strategy. Second, how are there four different terms in that utility function? Third, in u_1 (the target's function), I don't understand why you would subtract d_1 from c_1, since being sanctioned (c) is obviously worse than not being sanctioned (d).

Am I missing a fundamental aspect of simultaneous-move games and utility functions? Below are the images with the assumptions about preferences and the table:

(I tried having ChatGPT explain it to me but still didn't understand.)

https://preview.redd.it/e896lrjyb51e1.png?width=625&format=png&auto=webp&s=f06a03db0929f2c5a2ec75e66acc5184a5a35a33

https://preview.redd.it/pk2b2vc1c51e1.png?width=573&format=png&auto=webp&s=98e2cf6a30cb18b9bc6919bc14a6b5f97eee2474
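
One common construction, offered here as a guess at what the paper is doing rather than a quotation from it: treat the continuous choices x and y as weights on the four corner outcomes of the 2x2 game, so a player's utility is the bilinear mix of the four corner payoffs. That produces exactly four terms, one per corner, and differences such as c_1 - d_1 then show up naturally as coefficients when you collect terms in x and y.

    # Hedged sketch of a bilinear utility over continuous choices x (how much the
    # target violates) and y (how much the sender sanctions), built from the four
    # corner payoffs of the 2x2 game. The corner labels are my guesses, not
    # necessarily Tsebelis's notation.
    def u1(x, y, a1, b1, c1, d1):
        return (x * y * a1                  # violate and get sanctioned
                + x * (1 - y) * b1          # violate, no sanction
                + (1 - x) * y * c1          # comply but still get sanctioned
                + (1 - x) * (1 - y) * d1)   # comply, no sanction

    # At the corners this collapses back to the 2x2 payoffs, e.g. u1(1, 0, ...) = b1.
    print(u1(1, 0, a1=-1.0, b1=2.0, c1=-2.0, d1=0.0))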

Thanks in advance for anyone willing to help this old chunk of coal with game theory.

5 Comments
2024/11/15
23:08 UTC

1

Trouble Solving for Nash Equilibria using Maxima

I made a tool for analyzing payoff matrices and I was attempting to test it out with the problem recently posed here: https://www.reddit.com/r/GAMETHEORY/comments/1grtm9m/finding_best_response_in_3_player_kingmaker_game/

Here's my representation of the game:

https://i.imgur.com/f2klW4u.png

When I attempt to solve it in Maxima (using the system of equations that my tool spits out), I get no solution:

solve([
    ((σ_1b + σ_1c) = 1),
    (((σ_2d + σ_2e) + σ_2f) = 1),
    (((σ_3x + σ_3y) + σ_3z) = 1),
    (U_1 = ((((((((((1 * σ_2d) * σ_3x) + ((1 * σ_2d) * σ_3y)) + ((1 * σ_2d) * σ_3z)) + ((0 * σ_2e) * σ_3x)) + ((0 * σ_2e) * σ_3y)) + ((0 * σ_2e) * σ_3z)) + ((2 * σ_2f) * σ_3x)) + ((2 * σ_2f) * σ_3y)) + ((2 * σ_2f) * σ_3z))),
    (U_1 = ((((((((((1 * σ_2d) * σ_3x) + ((0 * σ_2d) * σ_3y)) + ((2 * σ_2d) * σ_3z)) + ((1 * σ_2e) * σ_3x)) + ((0 * σ_2e) * σ_3y)) + ((2 * σ_2e) * σ_3z)) + ((1 * σ_2f) * σ_3x)) + ((0 * σ_2f) * σ_3y)) + ((2 * σ_2f) * σ_3z))),
    (U_2 = (((((((σ_1b * 0) * σ_3x) + ((σ_1b * 0) * σ_3y)) + ((σ_1b * 0) * σ_3z)) + ((σ_1c * 0) * σ_3x)) + ((σ_1c * 2) * σ_3y)) + ((σ_1c * 1) * σ_3z))),
    (U_2 = (((((((σ_1b * 2) * σ_3x) + ((σ_1b * 2) * σ_3y)) + ((σ_1b * 2) * σ_3z)) + ((σ_1c * 0) * σ_3x)) + ((σ_1c * 2) * σ_3y)) + ((σ_1c * 1) * σ_3z))),
    (U_2 = (((((((σ_1b * 1) * σ_3x) + ((σ_1b * 1) * σ_3y)) + ((σ_1b * 1) * σ_3z)) + ((σ_1c * 0) * σ_3x)) + ((σ_1c * 2) * σ_3y)) + ((σ_1c * 1) * σ_3z))),
    (U_3 = (((((((σ_1b * σ_2d) * 2) + ((σ_1b * σ_2e) * 1)) + ((σ_1b * σ_2f) * 0)) + ((σ_1c * σ_2d) * 2)) + ((σ_1c * σ_2e) * 2)) + ((σ_1c * σ_2f) * 2))),
    (U_3 = (((((((σ_1b * σ_2d) * 2) + ((σ_1b * σ_2e) * 1)) + ((σ_1b * σ_2f) * 0)) + ((σ_1c * σ_2d) * 1)) + ((σ_1c * σ_2e) * 1)) + ((σ_1c * σ_2f) * 1))),
    (U_3 = (((((((σ_1b * σ_2d) * 2) + ((σ_1b * σ_2e) * 1)) + ((σ_1b * σ_2f) * 0)) + ((σ_1c * σ_2d) * 0)) + ((σ_1c * σ_2e) * 0)) + ((σ_1c * σ_2f) * 0)))
],[
    U_1,U_2,U_3,σ_1b,σ_1c,σ_2d,σ_2e,σ_2f,σ_3x,σ_3y,σ_3z
]), numer;

https://i.imgur.com/ATvyoyG.png

However, for other (similar, 3-player) games, I am able to get a solution:

https://i.imgur.com/4BIzeVo.png

Is this system of equations unsolvable? Is this a limitation in Maxima? Or perhaps I am forming the system of equalities incorrectly?
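
One likely explanation, hedged since only the equations are visible here and not the tool: the system equates each player's utility across all of their pure strategies, so it can only describe a totally mixed equilibrium. If the game's equilibria put zero probability on some strategy, those equalities are inconsistent and an empty solution set is the correct answer rather than a Maxima limitation. The general fix is support enumeration: for each candidate support, impose indifference only within the support, a no-profitable-deviation inequality outside it, and non-negativity. As a cheap first diagnostic you can check for pure-strategy equilibria; the payoff tensors below are placeholders to be filled in from your matrix.

    # Pure-strategy Nash check for a 2 x 3 x 3 game (P1: b,c; P2: d,e,f; P3: x,y,z).
    # Fill the tensors in from the payoff matrix; zeros are placeholders.
    import numpy as np
    from itertools import product

    U1 = np.zeros((2, 3, 3))   # U1[i, j, l] = player 1's payoff at profile (i, j, l)
    U2 = np.zeros((2, 3, 3))
    U3 = np.zeros((2, 3, 3))

    def pure_equilibria(U1, U2, U3):
        eqs = []
        for i, j, l in product(range(2), range(3), range(3)):
            if (U1[i, j, l] >= U1[:, j, l].max()       # P1 has no profitable deviation
                    and U2[i, j, l] >= U2[i, :, l].max()
                    and U3[i, j, l] >= U3[i, j, :].max()):
                eqs.append((i, j, l))
        return eqs

    print(pure_equilibria(U1, U2, U3))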

3 Comments
2024/11/15
22:04 UTC

4

Finding best response in 3 player Kingmaker game

I’m confident in finding the best response in a two player game but unsure on how to approach it when it’s a 3 player kingmaker game. Would like some advice or guidance for part a please.

3 Comments
2024/11/15
10:42 UTC

1

Capstone project

Hi, I'm not a game theorist but I'm looking for some advice. For some background, I'm a high school sophomore looking into graduating 2 years early, but one of the requirements is a capstone project. I've always thought game theory is interesting, so I've decided to try to do a project that models which colleges would be best to apply to with early decision (binding), based on game theory, in order to learn more about the field. I am currently taking AP Calculus BC, have self-studied linear algebra to an extent (at least up to eigenvalues, eigenvectors, etc.), and know differential equations (up to the 2nd-order linear homogeneous kind) in case I need some math background.

I would like to know if it's possible to model which types of colleges would be best to apply to with early decision using game theory. Some things to consider about the situation: the risk of applying to a college early decision and being rejected, the prestige of the college, applying early decision to a non-optimal college while having better options and being forced to go, and many others. If it is possible, where do I need to start in terms of learning game theory and modeling the problem? Do I need to catch up on some other math fields before this? I have multiple months to do the project, so time is not a major concern. Any advice would be appreciated. (Edit: I neglected to mention that other applicants could be represented as the other players in this situation, with their choices possibly affecting others' chances of getting in.)

2 Comments
2024/11/14
18:30 UTC

0

I found some text in a Roblox game called Fisch, and I had a feeling it was a Caesar cipher, which I figured out. I just need help finding anything else on this and understanding what it means. The decoded text reads as follows: "don't stray to far."
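
For anyone who wants to reproduce the decoding step, brute-forcing all 26 shifts is the standard trick. The ciphertext below is just an illustrative shift-3 encoding of the decoded phrase, since the original in-game string isn't quoted in the post:

    # Print every Caesar shift of a ciphertext; the readable line is the answer.
    def caesar_shifts(text):
        for k in range(26):
            out = []
            for ch in text:
                if ch.isalpha():
                    base = ord('A') if ch.isupper() else ord('a')
                    out.append(chr((ord(ch) - base - k) % 26 + base))
                else:
                    out.append(ch)
            print(k, "".join(out))

    caesar_shifts("grq'w vwudb wr idu")   # shift 3 recovers "don't stray to far"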

1 Comment
2024/11/12
17:25 UTC

3

Golfing trio dilemma

Imagine a team of three golfers competing together. While golf is an individual sport, the team’s overall success depends on everyone’s performance. Each golfer has two goals:

1.	Avoid Last Place: Each player wants to avoid being ranked last within the team.
2.	Help the Team: To boost the team’s overall score, each golfer can share insights or tips that could improve their teammates’ performance.

The dilemma? If a golfer shares too much information, they might help others perform better and potentially push themselves lower in the rankings. Each golfer has to decide how much to share to balance personal success with team success.

Question: Given these incentives, what’s the best strategy for each golfer to balance sharing and individual performance? Should they share everything they know to help the team or hold back just enough to avoid being last? What would you do to create the optimal balance between personal success and team performance?

4 Comments
2024/11/12
12:04 UTC

0

Game theory help needed

Hi guys, so I'm currently doing a game theory question and am kind of stuck. I have a non-symmetric zero-sum game that I need to find the mixed strategy for, but I can't find any videos teaching me how to do that (especially because it's non-symmetric).

It looks something like this

         L              C              R
L   (0.55, 0.45)   (0.8, 0.2)    (0.9, 0.1)
C   (0.9, 0.1)     (0.1, 0.9)    (0.8, 0.2)
R   (0.9, 0.1)     (0.8, 0.2)    (0.45, 0.55)

I have tried to use EU^L = EU^C = EU^R for the expected returns of player 1, writing σL, σC, σR for player 2's mixing probabilities.

Here EU^L = σL(0.55) + σC(0.8) + (1 - σL - σC)(0.9) = 0.9 - 0.35σL - 0.1σC,

and so on and so forth for the other two.

So, as an example of what I mean (in case I wrote something wrong), setting EU^L = EU^C gives:

0.9 - 0.35σL - 0.1σC = 0.8 + 0.1σL - 0.7σC

Am I on the right track? I'm not even sure if this is correct for non-symmetric games, and if it is, I'm still rather confused about how to go about solving it. So if someone out there knows what I'm talking about, I would appreciate some help. I know this is a long read, so thank you!
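
The indifference approach is the standard one here; asymmetry just means the two players' mixes are computed separately. Assuming a fully mixed equilibrium, the indifference conditions plus the probabilities summing to 1 give a 3x3 linear system for each player's mix. A quick NumPy version of the same calculation (a sketch only: if any solved probability falls outside [0, 1], the equilibrium is not fully mixed and a smaller support must be tried):

    # Solve player 2's mix (qL, qC, qR) from player 1's indifference conditions.
    import numpy as np

    # Player 1's payoffs: rows = P1's action (L, C, R), columns = P2's action (L, C, R).
    A = np.array([
        [0.55, 0.80, 0.90],
        [0.90, 0.10, 0.80],
        [0.90, 0.80, 0.45],
    ])

    M = np.vstack([A[0] - A[1],     # EU^L - EU^C = 0
                   A[1] - A[2],     # EU^C - EU^R = 0
                   np.ones(3)])     # qL + qC + qR = 1
    b = np.array([0.0, 0.0, 1.0])
    q = np.linalg.solve(M, b)
    print("Player 2's mix:", q)     # do the analogous solve with player 2's payoffs (1 - A here) to get player 1's mix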

4 Comments
2024/11/11
13:46 UTC

2

How to do proofs related to winning, drawing and losing strategies?

I am struggling to do proofs like "show player A has a drawing strategy", etc. The games and situations vary a lot, and I am not able to think of a general method to tackle these problems.

Are there any resources for me to practice on? And if possible, can anyone please share their experience? Thanks!
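
One general pattern worth internalizing: for finite games these proofs are usually backward induction on positions (sometimes dressed up as strategy-stealing arguments), and the same induction can be run mechanically as a win/draw/loss labelling, which is a good way to build intuition before writing the formal argument. A minimal sketch on a toy subtraction game, which only needs WIN/LOSE labels; games with draws get a third label handled the same way:

    # Retrograde labelling of positions in "remove 1 or 2 from n; if you cannot move, you lose".
    from functools import lru_cache

    MOVES = (1, 2)

    @lru_cache(maxsize=None)
    def label(n):
        """WIN/LOSE for the player about to move at position n."""
        successors = [n - m for m in MOVES if m <= n]
        if not successors:
            return "LOSE"                         # no legal move: the player to move loses
        # A position is a WIN iff some move leads to a LOSE position for the opponent.
        return "WIN" if any(label(s) == "LOSE" for s in successors) else "LOSE"

    print([(n, label(n)) for n in range(10)])     # LOSE exactly when n % 3 == 0, which is what an induction proof shows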

1 Comment
2024/11/11
03:38 UTC
