1. Most skills compound—the better we get, the faster we can get better. And humans are bad at estimating compounded effects, which is why Americans on the whole find themselves surprised at how much their debt has grown.
2. The better you get, the fewer competitors you have, and thus the more valuable your skill is, disproportionately to absolute skill level (a separate compounding effect).
The first is not true at all; graphs of expertise follow what look like logarithmic curves, because it’s a lot easier to master the basics than to become an expert. (Question: did Kasparov’s chess skill increase faster from novice to master status, or from grandmaster to world champion?) #2 may be true, but everyone can see that effect, so I don’t see how it could possibly cause systematic underestimation and a compensating sunk cost bias.
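The contrast between the two growth models can be made concrete. The sketch below uses hypothetical numbers (a 5% compounding rate and an arbitrary logarithmic scale are my own illustrative choices, not anything from the essay): under compounding, a fixed amount of practice yields *larger* gains later, while under a logarithmic curve the same practice yields *smaller* gains later.

```python
import math

def compounding_skill(practice, rate=0.05, base=1.0):
    """Skill that compounds: each gain makes the next gain larger."""
    return base * (1 + rate) ** practice

def logarithmic_skill(practice, scale=10.0):
    """Skill with diminishing returns: early practice pays off most."""
    return scale * math.log(1 + practice)

# Gains from 20 units of practice, early vs. late in a career
early_compound = compounding_skill(20) - compounding_skill(0)
late_compound = compounding_skill(120) - compounding_skill(100)

early_log = logarithmic_skill(20) - logarithmic_skill(0)
late_log = logarithmic_skill(120) - logarithmic_skill(100)

print(late_compound > early_compound)  # compounding: later gains dominate
print(early_log > late_log)            # logarithmic: early gains dominate
```

If expertise really compounded, the grandmaster-to-champion leap would be the fast part; the logarithmic picture says the novice-to-master leap is.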
On a separate note, the sunk cost fallacy may not be a fallacy at all: the standard analysis fails to take into account the social stigma of leaving projects incomplete versus completing them.
Mentioned in essay.
I believe this is the common “objection” to utilitarianism, and why hardly anyone (other than a LWer) professes to be a utilitarian: the way we actually think about utility functions doesn’t include the nuances that a complete function would.
This is one objection, and it’s why variants like rule utilitarianism exist and why act utilitarians emphasize prudence: we are boundedly rational agents, not logically omniscient utility maximizers.
Thanks