The Prisoner’s dilemma is the simplest idealized form of all scenarios in which a group of agents prefers that everyone cooperate rather than everyone defect, but each individual agent, whatever the other agents do, has an incentive to defect.
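As a quick illustration, here is a toy payoff table with the classic PD ordering T > R > P > S (the particular numbers are my own, chosen only to satisfy that ordering):

```python
# Toy two-player Prisoner's dilemma; numbers are illustrative, not canonical.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

# Whatever the other player does, defecting pays strictly more...
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# ...and yet mutual cooperation beats mutual defection for both players.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
print("defection is dominant, but (C, C) Pareto-dominates (D, D)")
```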
There are other common types of scenarios, of course. In zero-sum scenarios cooperation is not possible: a hunter and their prey can’t cooperate to split calories between them in a way that benefits both.
In other scenarios, cooperation is trivially the best choice: if Alice and Bob want to move a heavy object from point A to point B, and neither is strong enough to move it alone but they can move it with their combined strength, then they have an incentive to cooperate, and neither has an incentive to defect, since if either defects the heavy object doesn’t reach point B.
These scenarios are trivial from a game-theoretical perspective. The simplest and arguably the most practically relevant scenario where coordination is beneficial but can’t be trivially achieved is the prisoner’s dilemma.
Stag hunts (which are not the same as the hunter/prey scenarios discussed elsewhere in this thread) are another theoretically nontrivial category of coordination games with interesting social/behavioral implications—arguably more than the prisoner’s dilemma, though that probably depends on what kind of life you happen to find yourself in. I don’t know why they don’t get much exposure on LW, but it might have something to do with the fact that they don’t have the PD’s historical links to AI.
I agree that the Stag hunt is theoretically and practically interesting, but I would say it is not as interesting as the Prisoner’s dilemma.
In order to “solve” a Stag hunt (in the sense of realizing the Pareto-optimal outcome), all you need is a communication channel between the players; even a one-shot, one-way channel suffices.
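The reason a single message suffices is that mutual stag-hunting is itself a Nash equilibrium, so the announcement is self-enforcing. A toy payoff table (numbers are my own assumptions) makes this checkable:

```python
# Toy Stag hunt; payoff numbers are illustrative.
PAYOFF = {  # (my move, their move) -> my payoff
    ("S", "S"): 4,  # both hunt the stag: best outcome for each
    ("S", "H"): 0,  # I hunt the stag alone and fail
    ("H", "S"): 3,  # I settle for a hare while you go for the stag
    ("H", "H"): 3,  # both settle for hares
}

# (S, S) is a Nash equilibrium: if I expect you to hunt stag, hunting
# stag is my best reply, so the message "I'm going for the stag" sticks.
assert PAYOFF[("S", "S")] >= PAYOFF[("H", "S")]

# But (H, H) is also an equilibrium, which is why the players can get
# stuck on the worse outcome when no communication is possible.
assert PAYOFF[("H", "H")] >= PAYOFF[("S", "H")]
print("both (S, S) and (H, H) are equilibria; (S, S) is Pareto-optimal")
```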
In a Prisoner’s dilemma, communication is not enough: you need either to iterate the game or to modify the payoff matrix.
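A minimal sketch of the iterated route, with my own illustrative payoffs: against a tit-for-tat opponent, always defecting buys one round of exploitation and then locks in mutual defection, so it ends up worse than cooperating.

```python
# Iterated PD sketch; payoff numbers are illustrative.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs; a strategy maps the opponent's previous move to a move."""
    total_a = total_b = 0
    last_a = last_b = "C"  # both are treated as having cooperated initially
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a, total_b = total_a + pay_a, total_b + pay_b
        last_a, last_b = move_a, move_b
    return total_a, total_b

tit_for_tat = lambda opponent_last: opponent_last
always_defect = lambda opponent_last: "D"

print(play(tit_for_tat, tit_for_tat))    # -> (300, 300): cooperation throughout
print(play(always_defect, tit_for_tat))  # -> (104, 99): one exploit, then mutual defection
```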
There are other games that have significant practical applicability, such as Chicken/the Volunteer’s dilemma and the Ultimatum game.
I’m not aware of these links, do you have a reference?
Not offhand, but the PD (specifically, the iterated version) is a classic exercise to motivate prediction and interaction between software agents. I wrote a few in school, though I was better at market simulations. I believe LW ran a PD tournament at some point, too, though I didn’t participate in that one.
I believe it’s because it is at the same time very simple to explain and very interesting.
I think they ran two variations of program-equilibrium PD. I participated in the last one.
I understand that the prisoner’s dilemma is interesting and non-trivial from the game-theoretic perspective. That does not contradict my point that it’s rare in normal life and that most choices people actually make are not in this framework.
Unless the object weighs exactly enough that moving it requires both of their full strength, they both have an incentive to defect (to not put their full effort in and let the other work harder). Mutual defection then results in the object not reaching point B.
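A toy model of that shirking incentive, with all numbers my own assumptions: moving the object takes 1.5 units of combined effort, each agent can supply up to 1.0, effort costs 5 per unit, and a moved object is worth 10 to each.

```python
# Toy effort-shirking model; all parameters are illustrative assumptions.
REQUIRED, BENEFIT, COST_PER_UNIT = 1.5, 10, 5

def payoff(my_effort, their_effort):
    """My payoff: benefit if combined effort moves the object, minus my cost."""
    moved = my_effort + their_effort >= REQUIRED
    return (BENEFIT if moved else 0) - COST_PER_UNIT * my_effort

FULL, SHIRK = 1.0, 0.6

# If the other works at full strength, shirking pays strictly more...
assert payoff(SHIRK, FULL) > payoff(FULL, FULL)

# ...but if both shirk, the object never moves and both just burn effort.
assert SHIRK + SHIRK < REQUIRED
assert payoff(SHIRK, SHIRK) < 0
print("shirking is tempting against a full-effort partner; mutual shirking fails")
```

With these particular numbers the game is closer to Chicken than to a strict PD (against a shirker, working full effort is still worth it), but the shirking incentive against a cooperative partner is exactly the one described above.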
Most scenarios involve some variation. Even the hunter-prey scenario: the herd or the hunters could deliberately choose a sacrifice, saving both hunters and prey from running and expending additional calories on all sides, and reducing the overall number of prey animals the hunters would need to eat. (Consider a real-life example of this: human herders and their herds. Human-herd relationships are more complex than that, but they could be modeled this way.)