“Sure, but why do you expect people to systematically err in judging when it is time to abandon a project? Unless you have a reason for this, this is buck-passing.”
Because we aren’t psychic and can only guess expected payoffs. Why would I hypothesize that we underestimate expected payoffs for persistence rather than the reverse? Two reasons, or assumptions, I suppose:

1. Most skills compound: the better we get, the faster we can get better. And humans are bad at estimating compounded effects, which is why Americans on the whole find themselves surprised at how much their debt has grown (see the sketch below).
2. The better you get, the fewer competitors you have, and thus the more valuable your skill is, disproportionate to absolute skill level (a separate compounding effect).
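To make the compounding point concrete, here is a minimal sketch in Python; the 18% rate, the starting balance, and the time horizons are illustrative assumptions, not figures from this discussion. It compares a balance that actually compounds with the linear extrapolation people tend to make from the first year’s interest.

```python
def compounded(principal, annual_rate, years):
    """Balance after the same rate compounds every year."""
    return principal * (1 + annual_rate) ** years

def linear_guess(principal, annual_rate, years):
    """Naive estimate: the first year's interest repeated, with no compounding."""
    return principal * (1 + annual_rate * years)

principal, rate = 10_000, 0.18  # hypothetical credit-card balance at 18% APR
for years in (5, 10, 20):
    actual = compounded(principal, rate, years)
    naive = linear_guess(principal, rate, years)
    print(f"{years:>2} yr: compounded ${actual:>10,.0f}   naive ${naive:>8,.0f}")
```

With these made-up numbers, by year 20 the compounded balance is roughly six times the naive linear estimate; the gap widens every year, which is the sense in which compounded effects get systematically underestimated.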
“Persistence beyond ‘the frustration barrier’ may lead to outcomes like ‘I am the Japanese Pog-collecting champion of the world.’”
Yes, but which activity one persists in is a completely separate issue, so I feel you can just assume ‘for activities that reasonably seem likely to yield a large benefit’.
On a separate note, the sunk cost fallacy may not be a fallacy because it fails to take into account the social stigma of leaving projects incomplete versus completing them.
Oh, sure, if you were extra careful, you would take that into account in your utility function. You can always define your utility function to include everything relevant, but in real-life estimations of utility, some things just don’t occur to us.
I mean, consider morality. It’s so easy to say that moral rules have plenty of exceptions and so arrive at a decision that breaks one or more of these rules (and not simply for reasons of internal inconsistency). But this may be bad overall for society. You might arrive at a local maximum of overall good, but a global maximum would require strict adherence to moral rules. I believe this is the common “objection” to utilitarianism and why hardly anyone (other than an LWer) professes to be utilitarian: how we actually think of utility functions doesn’t include the nuances that a complete function would.
“Most skills compound: the better we get, the faster we can get better. And humans are bad at estimating compounded effects, which is why Americans on the whole find themselves surprised at how much their debt has grown. 2. The better you get, the fewer competitors you have, and thus the more valuable your skill is, disproportionate to absolute skill level (a separate compounding effect).”
The first is not true at all; graphs of expertise follow what look like logarithmic curves, because it’s a lot easier to master the basics than to become an expert. (Question: did Kasparov’s chess skill increase faster from novice to master status, or from grandmaster to world champion?) #2 may be true, but everyone can see that effect, so I don’t see how that could possibly cause systematic underestimation and a compensating sunk cost bias.
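The disagreement here is about the shape of the skill curve, so here is a toy sketch contrasting the two models; both functional forms and all constants are assumptions chosen only to show the shapes, not fits to any data on expertise.

```python
import math

def compounding_skill(hours, growth_per_hour=0.01):
    """Model where 'the better we get, the faster we can get better': exponential."""
    return (1 + growth_per_hour) ** hours

def diminishing_skill(hours):
    """Logarithmic-looking model where the earliest hours buy the most skill."""
    return math.log1p(hours)

for hours in (10, 100, 1000):
    print(f"{hours:>5} h: compounding={compounding_skill(hours):>9.1f}   "
          f"diminishing={diminishing_skill(hours):>5.2f}")
```

Under the logarithmic-style model each additional hour buys less skill than the one before; under the compounding model each hour buys more. That difference in shape is what the novice-to-master versus grandmaster-to-champion question is probing.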
“On a separate note, the sunk cost fallacy may not be a fallacy because it fails to take into account the social stigma of leaving projects incomplete versus completing them.”
Mentioned in essay.
“I believe this is the common ‘objection’ to utilitarianism and why hardly anyone (other than an LWer) professes to be utilitarian: how we actually think of utility functions doesn’t include the nuances that a complete function would.”
One objection, and why variants like rule utilitarianism exist and why act utilitarians emphasize prudence, since we are boundedly rational agents and not logically omniscient utility maximizers.
Thanks