Does the paradox go away if we set U(death) = -∞ utilons (making any increase in the chance of dying in the next hour impossible to overcome)? Does that introduce worse problems?
However, this doesn’t describe people’s actual utility functions: the fact that people cross roads shows they’re willing to take a small risk of death for other rewards.
I think this needs a bit of refinement, but it might work. Humans have a pretty strong immediacy bias; a greater than 0.1% chance of dying in the next hour really gets our attention. Infinity is way too strong; people do stand their ground on battlefields and such. But certainly you can assign a vast negative utility to that outcome as a practical description of how humans actually think, rather than as an ideal utility function describing how we ought to think.
But U(death in bignum years) would also be −∞ utilons then, right?
This problem was explicitly constructed as “living a long time and then dying vs living a short time and then dying.”
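The objection above can be made concrete with a minimal sketch (toy numbers assumed purely for illustration: one utilon per year lived, with the disutility of dying added at the end). With U(death) = −∞, expected utility assigns −∞ to every life that ends in death, so the agent cannot prefer the long life to the short one; a merely vast but finite negative utility preserves the ordering.

```python
def life_utility(years_lived, u_death):
    # Toy model: 1 utilon per year lived, plus the (dis)utility
    # of the death that ends the life. Numbers are illustrative,
    # not anyone's real utility function.
    return years_lived * 1.0 + u_death

u_death = float("-inf")
long_life = life_utility(10**9, u_death)   # "bignum years", then death
short_life = life_utility(1, u_death)      # one year, then death

print(long_life, short_life)    # -inf -inf: both outcomes collapse
print(long_life > short_life)   # False: the agent is indifferent

# A vast but finite U(death) keeps the comparison meaningful:
u_death_finite = -10**12
print(life_utility(10**9, u_death_finite)
      > life_utility(1, u_death_finite))   # True: long life preferred
```

This is exactly the collapse the comment points at: once both branches contain the same −∞ term, the finite "years lived" term can no longer break the tie.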