Each normal person’s normalized utility without hearing the symphony is 0.99999. Hearing the symphony would make it 1.00000.
The Beethoven utility monster would be at 0 without hearing the symphony and at 1 after hearing it.
Thus, if we directly sum normalized utilities, it’s better for the Beethoven utility monster to hear the symphony than for 90,000 regular people to do the same.
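For concreteness, here is a minimal sketch of the sums being compared, using only the numbers stated above:

```python
# Direct summation of normalized utilities, with the numbers from the setup above.
per_person_gain = 1.00000 - 0.99999   # each normal person's gain from hearing the symphony
bum_gain = 1.0 - 0.0                  # the Beethoven utility monster's gain

crowd = 90_000
print(crowd * per_person_gain)  # ~0.9: total gain if 90,000 normal people hear it
print(bum_gain)                 # 1.0: total gain if only the BUM hears it

# Direct summation therefore prefers the BUM over any crowd smaller than
# bum_gain / per_person_gain, i.e. about 100,000 normal listeners.
```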
This seems suspicious.
Am I the only one who doesn’t find this suspicious at all? After all, the Beethoven utility monster would gain 100,000 times as much fulfillment from the symphony as the normal people; it makes intuitive sense to me that it would be unfair to deny the BUM the opportunity to hear Beethoven’s Ninth just so that, say, 100 normal people could hear it. Those people wouldn’t be that much worse off for not having heard the symphony, which the BUM would rather die than miss.
Obviously this intuition breaks down in a lot of similar thought experiments (should we let the BUM run over pedestrians in the road on its way to Carnegie Hall? etc.), but if the goal is to show that summing normalized utility can give undesirable or unintuitive results, that particular thought experiment isn’t really ideal.
An agent’s revealed preferences are distinct from the agent’s feelings of desire. The Beethoven monster can be observed risking its own life to hear Beethoven, but that doesn’t mean it feels a strong desire. The same data could just as well be explained by a lack of any strong desire to keep living, or the agent could lack any emotions we would call “desire” or “desperation” at all. In the latter two cases, the fairness argument doesn’t seem clear to me.
The BUM would have a pretty high bar of evidence to meet to prove that running over one pedestrian was really necessary to reach the only B9 performance it could ever get to. By which I mean, it won’t be able to establish that.
I don’t find it suspicious either. Basically, the other people in the example don’t want to hear Beethoven all that much; they have other priorities.
Not if the pedestrians’ utility of being run over is close to 0 …
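Filling in the arithmetic this reply seems to be pointing at (a sketch on the same normalized scale, assuming a run-over pedestrian drops to roughly 0):

```python
# Hypothetical: a run-over pedestrian falls from 0.99999 to about 0 on the normalized scale.
pedestrian_loss = 0.99999 - 0.0   # ≈ 1, nearly as large as the BUM's entire gain
bum_gain = 1.0

print(bum_gain - pedestrian_loss)      # ≈ 0.00001: the net sum barely favors the trip
print(bum_gain - 2 * pedestrian_loss)  # ≈ -1: and it goes negative with a second pedestrian
```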
So, no, we shouldn’t let it run anyone over.