For ideal agents, absolutely. For things like humans… Have you looked at the models in “Do Sunk Costs Matter?”, McAfee et al 2007?
EDIT: I’ve incorporated all the relevant bits of McAfee now, and there are one or two other papers looking at sunk cost-like models where the behavior is useful or leads to better equilibria.
There are better ways of making credible commitments than having a tendency to commit sunk cost fallacy.
While that may be true, I don’t see how it has any consequences.
Of course. But what works, works; you’d cripple an agent by dispelling its fallacies without providing alternatives.