The purpose of “underdog bias” is nearly the opposite of your best guess. It exists because conflicts are too complicated for most people to model, and optional to get into. Even after several million years of evolution making brains smarter, humans still usually fail to see more than zero turns ahead in very simple games like Risk (e.g., if I break his bonus, and he goes right after me… well, I can break his bonus now! Let’s do it!). If you can’t accurately model the effects of starting a conflict, but you’re also prone to getting into conflicts you think you can win (thanks, evolution), the best hack is to make you believe you won’t win.
Why do I believe this? Well, I’ve seen this evolution in Risk. Newer players will overattack, using all their troops on the first few turns to take territories and break bonuses. They slowly evolve into turtles, keeping all their troops in one stack blocked in by their own territories so they can’t do anything even if they want to, and only ever attacking one territory at a time. This is where most players stop their evolution, because after learning zeroth-order heuristics like “the world is scary, better be super conservative,” the only way to progress further is to start modelling conflicts more than zero turns ahead.
Seems weird to posit that evolution performed a hack to undermine an instinct that was, itself, evolved. If getting into conflicts that you think you can win is actually bad, why did that instinct evolve in the first place? And if it’s not bad, why did evolution need to undermine it in such a general-purpose way?
I can imagine a story along the lines of “it’s good to get into conflicts when you have a large advantage but not when you have a small advantage”, but is that really so hard to program directly that it’s better to deliberately screw up your model of advantage just so that the rule can be simplified to “attack when you have any advantage”? Accurate assessment seems pretty valuable, and evolution seems to have created behaviors much more complicated than “attack when you have a large advantage”.
I agree that humans aren’t very good at reasoning about how other players will react and how this should affect their own strategy, but I don’t think that explains why they would have evolved one particular strategy that isn’t that, rather than some other strategy that isn’t that.
(Also, I don’t think Risk is a very good example of this. It’s a zero-sum game, so it’s mostly showing relative ability, not absolute ability. Also, the game is far removed from the ancestral environment and sending you a lot of fake signals (the strategies appropriate to the story the game is telling are mostly not appropriate to the abstract rules the game actually runs on), so it seems unsurprising to me that humans would tend to be bad at predicting behavior of other humans in this context. The rules are simple, but that’s not the kind of simplicity that would make me expect humans-without-relevant-experience to make good predictions about how things will play out.)
Millions of years ago, the world was pretty much zero-sum. Animals weren’t great at planning, such as going back for reinforcements or waiting months to take revenge, so fights were brief affairs determined mostly by physical prowess, which wasn’t too hard to predict ahead of time. It was relatively easy to tell when you could get away with bullying a weaker animal for food instead of hunting for your own.
When humans came along, with tools and plans, there was suddenly much less common knowledge going into a fight. What allies does this other human have to call upon? What weapons have they trained in? If they’re running away, are they just weaker, or are they leading you into a trap? If you actually can win the fight, you should take it, but the variance has shot up due to the unknowns, so you need a higher expected chance of winning if you don’t want an unlucky roll to end your life. If you enter fights whenever you instinctively feel you can win, then you will evolve to lower that instinctual confidence.
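To make the variance point concrete, here’s a toy sketch (the decision rule, the noise model, and the numbers are all my own illustration, nothing rigorous): suppose your gut estimate of your win probability is the true value plus Gaussian noise, and you attack whenever the estimate clears some threshold.

```python
import random

def realized_win_rate(noise, threshold, trials=100_000):
    """Win rate over the fights you choose to enter, when your gut
    estimate of the win probability is the true value plus Gaussian
    noise and you attack whenever the estimate clears the threshold."""
    wins = entered = 0
    for _ in range(trials):
        true_p = random.random()                    # your actual chance of winning
        estimate = true_p + random.gauss(0, noise)  # gut feeling = truth + noise
        if estimate > threshold:                    # "I think I can take him"
            entered += 1
            if random.random() < true_p:
                wins += 1
    return wins / entered if entered else float("nan")

for noise in (0.05, 0.30):
    for threshold in (0.50, 0.65, 0.80):
        rate = realized_win_rate(noise, threshold)
        print(f"noise={noise:.2f}  threshold={threshold:.2f}  realized win rate={rate:.2f}")
```

At a fixed threshold, more noise drags in fights you only thought you could win, so the realized win rate among the fights you actually enter drops. Since a single loss can be fatal, the margin you need over “I think I can win” grows with the unknowns.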
Agree that other players having tools, social connections, and intelligence in general all make it much harder to judge when you have the advantage. But I don’t see how this answers the question of “why create underdog bias instead of just increasing the threshold required to attack?”
Strong disagree on the ancient world being zero-sum. A lion eating an antelope harms the antelope far more than it helps the lion. Thog murdering Mog to steal Mog’s meal harms Mog far more than it helps Thog. I think very little in nature is zero-sum.