The introspective assessment is what is most persuasive to me here because
(1) it seems like we need some reason for what makes the marginal unit of utility less and less valuable, independent of the fact that it lets us out of pascalianism and that it is logically necessary to have a bound somewhere.
(2) bounded utility functions can also lead to counterintuitive conclusions, including violating ex ante Pareto (Kosonen, 2022, Ch. 1) and falling prey to an Egyptology objection (Wilkinson, 2020, section 6; the post, “How do bounded utility functions work if you are uncertain how close to the bound your utility is?”); see the first sketch after this list. But the Egyptology objection may be less significant in practical cases where we are adding value at the margin and can see that we are getting less and less utility out of it, rather than the bound being something we have to reason about in advance because we are considering some large amount of value that might hit the bound in one leap (though maybe that isn’t so crazy when thinking about AI). And I guess money pumping is worse than these other conclusions.
(3) bounded utility functions seem neither necessary to avoid pascalianism nor the most obvious option. Someone could easily have an unbounded utility function with regard to sure bets but reject pascalian bets due to probability discounting (or to prevent exploitation in the literal mugging scenario, as you mention); see the second sketch after this list. But other people bring this up often as a natural response to pascalianism, so I may be missing a reason why you would not want to, e.g., value saving 1 billion lives for sure 1,000 times more than saving 1,000,000 lives for sure, while assigning ~no value to a 0.000001 chance of saving 1 billion lives.
(4) your reasoning that things cannot get better and better without bound makes sense. For a given individual over finite time, it seems like there will be a point where you are just experiencing pleasure all the time / have all your preferences satisfied / have everything on your objective list checked off, and then if you increase utility via time or population, you run into the thing your prior post was about. But if we endorse hedonic utilitarianism, I wonder whether this intuition of mine is just reifying the hedonic treadmill and neglecting ways utility may be unbounded, particularly in the negative direction.
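To make the Egyptology worry in (2) concrete, here is a minimal sketch with one illustrative bounded utility function (the specific functional form is my own choice, not anything from the post or the papers cited):

$$U(n) = B\left(1 - e^{-n/k}\right), \qquad U'(n) = \frac{B}{k}\,e^{-n/k},$$

where $n$ is the total stock of value (happy lives, say), $B$ is the bound, and $k$ sets the scale. The marginal value $U'(n)$ of one more happy life depends on $n$, the total that already exists anywhere, so pricing a new life in principle requires knowing, e.g., how many happy lives there were in ancient Egypt. The practical point above is that when adding value at the margin we may be able to observe $U'(n)$ falling directly, rather than having to locate $n$ in advance.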
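And here is one way to fill in the numbers in (3), again only an illustrative sketch (the threshold form of the probability discount is my own choice):

$$V(p \text{ chance of saving } x \text{ lives}) = w(p)\,x, \qquad w(p) = \begin{cases} p, & p \ge \varepsilon \\ 0, & p < \varepsilon \end{cases}$$

With, say, $\varepsilon = 10^{-3}$: saving $10^9$ lives for sure gets value $10^9$, exactly $1{,}000$ times the value of saving $10^6$ lives for sure, while a $10^{-6}$ chance of saving $10^9$ lives gets value $0$ because $10^{-6} < \varepsilon$. The utility of sure outcomes stays unbounded and linear; only tiny probabilities are discounted.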
I may indeed have made a mistake in frontloading the math and thought experiments and putting the introspection at the end, rather than centering the introspection and putting the rest in an appendix.
That’s not how utility works: utility is the unit of value, so it doesn’t make sense in my ontology to say that it diminishes in value.
I don’t think I’m anywhere near negative utilitarian enough to empathize with that last point. As I mention in my previous post, I’m quite a positive utilitarian.
I don’t really have time to digest (2) and (3) right now, and I find myself confused without reading up on the things you cite.
Oops, I was sleepy when I wrote this and used sloppy wording. Meant to say “what makes the marginal unit of value (e.g. happy lives, days of happiness, etc.) provide less and less utility.”
I think the last point can also apply in the positive direction or at least does not require weighting negative value more heavily.