Perhaps the sunk cost fallacy is useful because without it you’re prone to switch projects as soon as a higher-value project comes along, leaving an ever-growing heap of abandoned projects behind you.
There’s actually some literature on justifying the sunk cost fallacy, pointing to the foregone learning of switching. (I should finish my essay on the topic; one of my examples was going to be ‘imagine a simple AI which avoids sunk cost fallacy by constantly switching tasks...’)
EDIT: you can see my essay at http://www.gwern.net/Sunk%20cost
Why would an AI have the sunk cost fallacy at all? Aren’t you anthropomorphizing?
No, his example points out what an AI that specifically does not have the sunk cost fallacy is like.
The thing is, an AI wouldn’t need to feel a sunk cost effect. It would act optimally simply by maximising expected utility.
For example, say I decide to work on Task A, which will take me five hours and earn me $200. After two hours of work, I discover Task B, which will award me $300 after five hours. At this point I can behave like a human, feel bored and annoyed, and maybe let the sunk cost effect make me continue. Or I can calculate the expected return: I’ll get $200 after 3 more hours of work on Task A, which is about $67 per hour, whereas I’ll get $300 after 5 hours on Task B, which is $60 per hour. So the rational thing to do is to avoid switching.
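The arithmetic above can be sketched as a quick calculation (numbers are the hypothetical ones from the example; the point is that an agent free of the sunk cost effect simply compares remaining payoff against remaining effort):

```python
# Compare the per-hour return of the *remaining* work on each task,
# ignoring the hours already sunk.
def hourly_rate(payoff, hours_remaining):
    return payoff / hours_remaining

# Task A: $200 payoff, 5 hours total, 2 already worked -> 3 hours remain.
rate_a = hourly_rate(200, 5 - 2)
# Task B: $300 payoff, 5 hours of work, none done yet.
rate_b = hourly_rate(300, 5)

best = "A" if rate_a > rate_b else "B"
print(f"Task A: ${rate_a:.2f}/h, Task B: ${rate_b:.2f}/h -> stick with Task {best}")
```

Note that the 2 hours already spent on Task A never enter the comparison; they only matter insofar as they reduce the hours remaining.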
The sunk cost fallacy reflects the fact that after putting work into something, less work remains before the payoff, so the effective wage for continuing rises. An AI wouldn’t need a special bias to capture that; it falls out of acting optimally.
One of my points is that you bury a great deal of hidden complexity and intelligence in ‘simply maximize expected utility’; it is true that sunk cost is a fallacy in many simple fully-specified models, and any simple AI can be rescued just by saying ‘give it a longer horizon! more computing power! more data!’, but do these simple models correspond to the real world?
(See also the question of whether exponential discounting rather than hyperbolic discounting is appropriate, if returns follow various random walks rather than remain constant in each time period.)
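For concreteness, a minimal sketch of the two discount curves being contrasted (the functional forms are the standard textbook ones, and the parameters delta and k are arbitrary choices for illustration, not anything specified in the thread):

```python
# Exponential vs. hyperbolic discount factors for a reward t periods away.
def exponential(t, delta=0.9):
    # Constant proportional decay per period.
    return delta ** t

def hyperbolic(t, k=0.1):
    # Falls off quickly at short delays, slowly at long ones.
    return 1 / (1 + k * t)

for t in (0, 1, 10, 100):
    print(t, round(exponential(t), 4), round(hyperbolic(t), 4))
```

With these parameters the hyperbolic curve retains far more value at long horizons (about 0.09 at t=100 versus roughly 0.00003 for the exponential), which is why the choice of curve matters when returns don’t stay constant across periods.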
You neglected the part where the AI may stand to learn something from the task, which may have a large expected value relative to the tasks themselves.
Yeah, but that comes under expected utility.
What else are you optimising besides utility? Doing the calculations with the money can tell you the expected money value of the tasks, but unless your utility function is U=$$$, you need to take other things into account.
Off-topic, but...
I like how you have the sections on the side of your pages. Looks good (and works reasonably well)!
Thanks. It was a distressing amount of work, but I hoped it would pay for itself by keeping readers oriented.
Yep, it seems to. :)
(Bug report: the sausages overlap the comments (e.g. here), maybe just a margin-right declaration in the CSS for that div?)
I don’t see it. When I halve my screen, the max-width declaration kicks in and the sausages aren’t visible at all.
Hmm, peculiar...
Here is what I see: 1 2 (the last word of the comment is cut off).
First image link is broken; I see what you mean in the second. Could it be your browser doesn’t accept CSS3 at all? Do the sausages ever disappear as you keep narrowing the window width?
(Not sure what happened to that link, sorry. It didn’t show anything particularly different to the other one though)
Those screenshots are Firefox Nightly (so bleeding-edge CSS3 support), but Chrome stable shows a similar thing (both on Linux).
Yes, the sausages do disappear if the window is thin enough.