Say that the universe has Clippy the paperclip maximiser plus 9 of his friends, and Roger the rubberband maximiser, also with 9 friends. Say also that the world is metal-rich, which makes paperclips easier to produce. You have the options to:
Increase the production efficiency of paperclips
Increase the production efficiency of rubberbands
Decrease the number of rubberband maximisers by 1
Increase the number of paperclip maximisers by 1
For the plus and minus options (converting a maximiser from one camp to the other) to win out, you have to show that an individual would be “better off” converting than increasing efficiency. The first two options raise the utility value of the world within a single utility function/evaluator; the conversion options require some mapping between the two utility functions. While I have assumed that paperclipping is easier, I have not assumed that paperclipping is more moral than rubberbanding. Yet the recommendation seems to be to either work for the paperclippers or try to convert everyone. The rubberbanders got utility-monstered. It’s also dubious that converting people only selects the political direction of the world and doesn’t impact the ability to pursue that direction.
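To make the asymmetry concrete, here is a minimal sketch of the arithmetic with illustrative numbers of my own choosing (the 12-vs-8 efficiencies and the 5% boost are not fixed by the scenario): each agent produces its favoured good, each agent’s utility is the world total of that good, and the impartial aggregate weighs every agent equally.

```python
# Toy model of the scenario above; all concrete numbers are assumptions.
def aggregate_utility(n_clip, n_band, e_clip, e_band):
    clips = n_clip * e_clip  # total paperclips produced
    bands = n_band * e_band  # total rubberbands produced
    # each paperclipper values total clips, each rubberbander total bands
    return n_clip * clips + n_band * bands

base = aggregate_utility(10, 10, 12.0, 8.0)  # metal-rich: clips come easier

options = {
    "boost clip efficiency 5%": aggregate_utility(10, 10, 12.6, 8.0),
    "boost band efficiency 5%": aggregate_utility(10, 10, 12.0, 8.4),
    "convert a bander to a clipper": aggregate_utility(11, 9, 12.0, 8.0),
    "convert a clipper to a bander": aggregate_utility(9, 11, 12.0, 8.0),
}
for name, value in options.items():
    print(f"{name}: {value - base:+.0f}")
```

With these numbers the deltas come out +60, +40, +100 and -60: converting towards the easier value beats both efficiency boosts, its mirror image loses utility, and since the producer count enters the aggregate quadratically, each further conversion towards paperclips looks better than the last.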
Thus, surprisingly, treating all values identically ended up favouring one of them over all others. You could have thought that the values would keep a distribution similar to the one they started with. It also seems that a person who “wants to do the most good for the world possible” is rather doing the thing that creates the world that owes its existence to the benefactor the most. Thus easily accomplished values will have priority. This deviates from my understanding of what it is to do good.
I think the ability to judge the values of others should not be hidden in an implicit assumption that all values are equally duty-generating. But being insensitive or overtly harsh seems problematic as well. It should be recognised as a problem of choice, rather than have theories make such choices for us in an accidental manner.
Thus surprisingly treating all values identically ended up favoring one of them over all others.
Interesting, this casts some light on the repugnant conclusion for me. A naive utilitarianism will favor creating lots of minds that have easily satisfied preferences, so that more of them can be created under a given resource constraint. We can improve on this by noting that we value complex minds enjoying complex things. But if a more complex mind has more worth, then how do I evaluate a Dyson-sphere-sized brain relative to my own utility?
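As a back-of-the-envelope sketch of that question (the budget and both utility curves below are pure assumptions of mine): fix a resource budget, let each mind cost c and yield u(c) utility, and see which population wins depending on how u scales with complexity.

```python
import math

R = 1_000_000  # total resource budget (arbitrary)

def total_utility(cost, u):
    # number of minds the budget buys, times utility per mind
    return (R // cost) * u(cost)

for cost in (1, 100, 10_000, 1_000_000):
    sub = total_utility(cost, math.sqrt)         # worth grows sublinearly
    sup = total_utility(cost, lambda c: c**1.5)  # worth grows superlinearly
    print(f"cost={cost:>9}: sublinear={sub:>9,.0f}  superlinear={sup:>16,.0f}")
```

Under the sublinear curve the optimum is a swarm of the simplest possible minds (the repugnant direction); under the superlinear curve a single Dyson-sphere-sized brain takes the whole budget. The toy just makes explicit that the answer hangs entirely on how worth scales with complexity.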
We know that the kind of mind we value happens to be complex, which is a different thing from valuing it because it is complex. It doesn’t strike me as intuitive that I would value a person that is maximally twisted up.
When I check my intuitions I seem to value simple minds less and more complex minds more, robustly across the range of complexity in minds we observe. It does feel weird to try to imagine stretching this scale to include things more complex than me, but it feels weirder to make current humans the cutoff, if that makes sense.
When I check which minds I seem to appreciate among the minds we observe, it seems those minds that have a larger surface area are worth more. Extrapolating this is weird, and it is unlikely that the human mind is the apex of possible surface area. But I am pretty sure that having a larger surface area would not by itself be sufficient to make me care more. However, a larger surface area would make it more probable / provide more resources for something worthwhile to happen with it, provided that it is not “wasted”. I don’t have a clear handle on what the “good” produced is, but just having several acres of neural tissue around is not the finished stage.