What about a purely hedonistic utility function, i.e. a utility function that seeks to maximize the amount of subjective pleasure experienced by (if you’re selfish) yourself or (if you’re altruistic) everyone? This of course allows for utility monsters, but I’m not convinced that that’s necessarily a bug as opposed to a feature, and it’s the only solution I see to the “human values are ultimately arbitrary” problem.
That leads to the repugnant conclusion.
Also, the way you describe it, it would lead to wireheading everyone.
I am aware of this, and as my previous comment implies, I’m not convinced it’s actually all that repugnant.
Possibly, yes. Presumably, if you don’t value being wireheaded, you wouldn’t like that (or more accurately, you would like that, but you wouldn’t want that). But that just leads back to the question of what to value, which, as the article points out, is not at all straightforward to settle.