I suppose you could try the abstract route. What sort of properties would cause a utility-maximizing agent to be okay with dying? What sort of utility function could lead an agent to choose, say, $500 and a 100-year lifespan over immortality? What sort of agent could extract an infinite amount of utility from living an infinite life? What sort of agent would only get a finite amount of utility from a finite life?
These problems are a bit tricky.
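One shape of answer, as a minimal sketch: an agent that discounts future years geometrically assigns a finite, bounded value even to an infinite life, so a steep enough discounter will take a small immediate bonus over immortality. Conversely, an undiscounted agent with positive per-year utility gets infinite utility from an infinite life and only finite utility from any finite one. All the numbers below (the discount rate, the one-utility-per-year scale, the 0.5 valuation of the $500) are illustrative assumptions, not claims about anyone's actual preferences.

```python
def discounted_utility(per_year, years, discount=0.9):
    """Total utility of a life `years` long, where each successive year
    is worth `discount` times the previous one (a geometric series).
    Because the series converges, even an infinite life is worth only
    per_year / (1 - discount)."""
    if years == float("inf"):
        return per_year / (1 - discount)  # limit of the geometric series
    return per_year * (1 - discount**years) / (1 - discount)

u = 1.0  # hypothetical utility of one year of life
mortal = discounted_utility(u, 100) + 0.5       # 100 years, plus $500 valued at 0.5
immortal = discounted_utility(u, float("inf"))  # bounded at 1 / (1 - 0.9) = 10

print(mortal, immortal)  # ~10.4997 vs 10.0: this agent takes the $500
```

With a discount factor of 0.9, the first 100 years already capture almost all of the (bounded) value of immortality, so the small bonus tips the choice; with a gentler discount like 0.99, immortality wins again. The question in the paragraph above is really a question about which of these utility functions, if any, is yours.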
And then, of course, there's the subjective part. Which agent are you most like?