There was an article in Scientific American a few years ago about the Traveler’s Dilemma and how human beings make more money than they would by playing the Nash equilibrium. Edit: Wikipedia summary
It occurred to me that the percentage fallacy might explain why people give high numbers in this version: the Nash equilibrium is pocket change compared to the max payoff. The same goes for the reward for undercutting; you might not be so motivated to low-ball when that reward is only 2% of the max payoff.
It would be interesting to see an experiment where the payoff for giving the low estimate varied. If you were playing the game with a $10 bonus for lowballing, would you give the Nash equilibrium of $10? Or would you go for the $40s or $50s, hoping the other person would go even higher? My guess is that as the reward for undercutting grows as a percentage of the max reward, people get more and more vicious, and at some percentage they default to the Nash equilibrium.
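A quick sketch of the payoff rule might make the percentages concrete. This uses the standard parameters from the article (claims from $2 to $100, bonus/penalty of $2); the function, the parameter names, and the variable-bonus loop are just my own illustration, not anything from the article or an actual experiment:

```python
def payoff(my_claim, other_claim, bonus=2):
    """My payoff in the Traveler's Dilemma, given both claims and the undercutting bonus."""
    if my_claim == other_claim:
        return my_claim                  # equal claims: both are paid in full
    low = min(my_claim, other_claim)     # only the lower claim is honored
    if my_claim < other_claim:
        return low + bonus               # lower claimant collects the bonus
    return low - bonus                   # higher claimant pays the penalty

# The "percentage" point: the marginal gain from undercutting a $100 claim by
# a dollar, for a few hypothetical bonus sizes, measured against the $100 max.
for bonus in (2, 10, 50):
    gain = payoff(99, 100, bonus) - payoff(100, 100, bonus)
    print(f"bonus ${bonus}: undercutting $100 by $1 gains ${gain} "
          f"({gain / 100:.0%} of the max claim)")
```

With the standard $2 bonus, undercutting nets you a single extra dollar, about 1% of the max claim, which is consistent with the intuition above that people shouldn't feel much pull toward the equilibrium until the bonus becomes a serious fraction of the max payoff.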
Hi LW,
My name’s Dan LaVine. I forget exactly how I got linked here, but I haven’t been able to stop following internal links since.
I’m not an expert in anything, but I have a relatively broad/shallow education across mathematics and the sciences and a keen interest in philosophical problems (not quite as much interest in the traditional approaches to those problems). My tentative explorations of these problems are broadly consistent with a lot of the material I’ve read on this site so far. Maybe that means I’m exposing myself to confirmation bias, but so far I haven’t found anywhere else where these ideas, or the objections to them, are developed to the degree they are here.
My aim in considering philosophical problems is to try to understand the relationship between my phenomenal experience and whatever causes it may have. Of course, it’s possible that my phenomenal experience is uncaused, but I’m going to try to exhaust the alternative hypotheses before resigning myself to an entirely senseless universe. Which is how I wind up as a rationalist: I can certainly consider possibilities such as the impossibility of knowledge, that I might be a Boltzmann brain, or that I live in the Matrix, but I can’t see any way to prove or provide evidence for them, and if I take the truth of any of them as foundational to my thinking, it’s hard to see what I could build on top of it.
Looking forward to reading a whole lot more here. Hopefully, I’ll be able to contribute at least a little bit to the discussion as well.