Then I think I’d agree it’s controversial and that it’d be downvoted if people realized that was what you meant. I don’t really understand why you think that, since I could imagine a ‘selfless utility maximizer’ for which the utility it assigned to its own mental-state valence was negated… unless you consider the valence to be its utility function, in which case it wouldn’t be controversial at all. This would actually be something like my preferred form of utilitarianism; however, it would definitely involve caring about things other than oneself. If you wanted to derive that care for other things from selfish utility maximization alone, you would need to employ a decision theory, would you not? I get the impression I am still missing something.
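To illustrate the distinction I have in mind, here is a toy sketch (the agents, valences, and weights are all made up): a utility function written as a weighted sum over mental-state valences, where zeroing the other-regarding weight gives the ‘selfish’ maximizer, negating the self-weight gives the ‘selfless’ one, and keeping both positive is roughly the form of utilitarianism I described.

```python
# Toy illustration only -- the entities, valences, and weights are invented.
from typing import Dict

def utility(valences: Dict[str, float], me: str,
            self_weight: float, other_weight: float) -> float:
    """Weighted sum of mental-state valences: one term for the agent's own
    valence, one term for the valences of every other entity."""
    own = self_weight * valences[me]
    others = other_weight * sum(v for name, v in valences.items() if name != me)
    return own + others

valences = {"me": 1.0, "alice": -2.0, "bob": 0.5}

selfish     = utility(valences, "me", self_weight=1.0,  other_weight=0.0)  # no terms for others
selfless    = utility(valences, "me", self_weight=-1.0, other_weight=1.0)  # own valence negated
utilitarian = utility(valences, "me", self_weight=1.0,  other_weight=1.0)  # everyone counts

print(selfish, selfless, utilitarian)  # 1.0, -2.5, -0.5
```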
Perhaps here is where the controversy comes in. The utilitarian comes along and says, “I want to maximize utility!” And everyone thinks, “great! she wants to help everyone out!” The selfish egoist comes along and says, “I am just going to fulfill whatever selfish desires I have!” And everyone thinks, “wow, that’s scary! what stops you from murdering people?”
I think, also, there is a sense in which utilitarians work to maximize the same utility function. This is also true for selfish egoists, but utilitarians are both better and worse at negotiating: they are more prone to negotiate, yet they make mistakes that are biased towards reaching a consensus, simply because they approach the same problem from different directions.
“The selfish egoist comes along and says, ‘I am just going to fulfill whatever selfish desires I have!’ And everyone thinks, ‘wow, that’s scary! what stops you from murdering people?’”

I can certainly imagine a selfish (under my definition) superintelligence which does want to murder everyone to… turn them into paperclips, for example. The fact that its utility function doesn’t have additional terms for (valuing the conscious experience of) other entities is what makes it so dangerous. Am I correct to state that this is not what you mean when you say ‘selfish’?
“I think, also, there is a sense in which utilitarians work to maximize the same utility function.”
Could you explain this? I could certainly imagine utilitarians converging on the same behaviour, but that seems different, even at a mathematical level, from actually being maximizers of one another’s utility functions.
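To make the distinction concrete, here is a toy example (the actions and numbers are invented): two agents whose utility functions clearly differ can still pick the same action from the options in front of them, so matching behaviour on its own doesn’t show that they are maximizing the same function.

```python
# Made-up example: two different utility functions that happen to agree
# about which action is best, while disagreeing about everything else.
actions = ["donate", "defect", "do_nothing"]

u_alice = {"donate": 10.0, "defect": -5.0, "do_nothing": 0.0}
u_bob   = {"donate":  3.0, "defect":  2.0, "do_nothing": 1.0}

best_for_alice = max(actions, key=u_alice.get)
best_for_bob   = max(actions, key=u_bob.get)

print(best_for_alice, best_for_bob)  # both choose "donate"
print(u_alice == u_bob)              # False: same behaviour, different utility functions
```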
Yes.
Sorry, I don’t really want to make this a long thing. I have written a little on this elsewhere (1, 2, 3).