Sorry if this is confusing. What I’m saying is, you have some estimate of the project’s valuation, and this factors in the information that you expect to get in the future about the project’s valuation (cf. Conservation of Expected Evidence). If there’s some chance the project will turn out worthwhile, you know that chance already. But there must also be some counterbalancing chance that the project will turn out even less worthwhile than you think.
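To make that concrete, here is a toy simulation of Conservation of Expected Evidence (a sketch with made-up numbers, not anything from the original post): the prior mean already averages over every possible future observation, so good news and bad news must cancel out in expectation.

```python
# Toy illustration of Conservation of Expected Evidence.
# All numbers here are illustrative assumptions.
import random

random.seed(0)

PRIOR_MEAN, PRIOR_SD = 0.0, 1.0  # prior belief about the project's value
NOISE_SD = 1.0                   # noise in the future evidence

def posterior_mean(signal):
    # Standard Gaussian update: shrink the observed signal toward the prior.
    weight = PRIOR_SD**2 / (PRIOR_SD**2 + NOISE_SD**2)
    return PRIOR_MEAN + weight * (signal - PRIOR_MEAN)

trials = 100_000
total = 0.0
for _ in range(trials):
    true_value = random.gauss(PRIOR_MEAN, PRIOR_SD)
    evidence = true_value + random.gauss(0.0, NOISE_SD)
    total += posterior_mean(evidence)

# Averages to roughly the prior mean (~0.0): you cannot expect the
# future evidence to move your estimate in any particular direction.
print(total / trials)
```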
I still don’t understand. Your valuation of the project will still change over time as information actually gets revealed, though: the probability the project will turn out worthwhile can fluctuate.
At any given point, you have some probability distribution over how worthwhile the project will be. The distribution can change over time, but it can change either for better or for worse. Therefore, at any point, if a rational agent expects it not to be worthwhile to expend the remaining effort to get the result, they should stop.
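As a sketch, the decision rule being described is just the following (the names are hypothetical, chosen for illustration); note that the effort already spent appears nowhere in it:

```python
def should_continue(expected_value_of_result: float, remaining_cost: float) -> bool:
    # Sunk costs do not enter this comparison: only the expected value of
    # finishing versus the effort still required to finish.
    return expected_value_of_result > remaining_cost
```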
Of course, if you are irrational and intentionally fail to account for evidence as a way of getting out of work, this does not apply, but then that’s the problem, not your disregard for sunk costs.
I don’t disagree with what you’re saying about theoretically rational agents. I think the content of my post was [there are a bunch of circumstances in which humans are systematically irrational, and the sunk cost fallacy is on net a useful corrective heuristic in those circumstances. Attempting to make rational decisions via explicit, legible calculations will in practice underperform just following the heuristic.]
To spell this out a bit more, imagine my mood swings cause a large random error term to be added to all explicit calculations. Then, if the decision process is to drop a project altogether at any point where my calculations say the project is doomed, I will drop a lot of projects that are not actually doomed.
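Here is a toy simulation of that error term (all the numbers are assumptions for illustration): a project that is genuinely worth finishing, re-evaluated weekly with a noisy estimate, almost never survives a “drop at the first pessimistic reading” rule, while a rule that errs on the side of continuing usually keeps it.

```python
# Toy simulation of mood-driven noise in explicit project evaluations.
# All numbers are illustrative assumptions.
import random

random.seed(0)

TRUE_VALUE = 1.0  # the project genuinely is worth finishing (value > 0)
ERROR_SD = 2.0    # large mood-driven error in each explicit calculation
WEEKS = 10        # number of re-evaluations before the project completes

def survives(drop_threshold):
    # Drop the project the first time the noisy estimate falls below the
    # threshold; return whether it survives all WEEKS re-evaluations.
    for _ in range(WEEKS):
        estimate = TRUE_VALUE + random.gauss(0.0, ERROR_SD)
        if estimate < drop_threshold:
            return False
    return True

trials = 10_000
naive = sum(survives(0.0) for _ in range(trials))      # drop whenever estimate < 0
stubborn = sum(survives(-3.0) for _ in range(trials))  # err toward continuing

print(f"naive rule finishes the good project {naive / trials:.0%} of the time")
print(f"stubborn rule finishes it {stubborn / trials:.0%} of the time")
```

With these numbers the naive rule abandons the worthwhile project the vast majority of the time, which is exactly the failure mode the heuristic is meant to protect against.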
I agree with you on this, but I also don’t think “sunk cost fallacy” is the right word to describe what you’re saying. The rational behavior here is to factor the existence of a random error term resulting from mood swings into these calculations, and if you can’t fully factor it in, then generally err on the side of keeping projects going. I understand “sunk cost fallacy” to mean “factoring the amount of effort already spent into these decisions,” which does seem like a pure fallacy to me.
It’s reasonable, e.g., when about to watch a movie, to say “I’m in a bad mood, I don’t know how bad a mood I’m in, so even though I think the movie’s not worth watching, I’ll watch it anyway, because I don’t trust my assessment and I decided to watch it when in a calmer state of mind.” Sunk cost fallacy is where you treat the decision differently if you bought the tickets yourself versus if they were given to you as a gift, which does seem, even in your apology for “sunk cost fallacy,” to remain a fallacy.