Then wouldn’t it just depend on the utility function?
*checks Wikipedia’s definition of egoism to make sure I know what I’m talking about*
*finds that it can either be defined as the statement that one should, or that one will tend to, pursue one’s own self-interest*
Which of these are you referring to as egoism?
If it is the first, then it seems uncontroversial to claim that this is utilitarianism with a utility function centred on your own mental states.
Not just egoism, selfish egoism. Every utility function people choose is a selfish one or they wouldn’t choose it. The claim isn’t, “selfish egoism is a subset of utilitarianism,” but “selfish egoism is identical to utilitarianism.”
This argues that utilitarianism is selfish egoism, but not the converse? My reading of your position is that someone whose utility function did not depend on the wellbeing of any other beings would be a selfish egoist, but it’s difficult for me to understand how that could be utilitarian.
How do you determine which beings ought to be in a utilitarian’s utility function? I think generally the utilitarian decides for themselves, and the rest of society beats them over the head until the utilitarian includes them too.
Then I agree that would probably receive downvotes if understood as such, though I’m not sure it would be. I still think there’s something I’m failing to understand; would you extend your claim to a perfectly logical being? In other words, do you think that this is just a property of humans, or of any kind of utilitarianism which assigns positive utility to positive mental states?
I don’t understand what you don’t understand. I heard a remark once about a philosopher who really tried to steelman other people’s arguments, but so that they made sense according to the philosopher, not in the mental frame of the other person. It led to some pretty wacky arguments on the steelman side. I think here, you should assume when I say, “mathematically equivalent,” that’s what I mean. Like, any math you use in utilitarianism is the same as that of selfish egoism. Or, if you tried to put the two philosophies in mathematical terms, you get the exact same equations. So, it extends to logical beings or irrational beings. The words “selfish egoism” and “utilitarianism” are synonyms.
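To make the “exact same equations” claim concrete, here is a minimal sketch in my own notation (nothing below was written by either commenter): on both views, the agent picks the action that maximizes expected utility,

$$a^{*} \;=\; \operatorname*{arg\,max}_{a \in A}\; \mathbb{E}\big[\,U(s) \mid a\,\big],$$

and the two positions differ at most in how $U$ is glossed. The utilitarian might write $U(s) = \sum_i w_i\, u_i(s)$, a weighted sum over individuals’ welfare; the equivalence claim, as I read it, is that since the agent itself chose the weights $w_i$, any such $U$ is equally a selfishly chosen $U$, and the maximization machinery is the same either way.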
“Like, any math you use in utilitarianism is the same as that of selfish egoism.” With no constraints on the utility function?
Yes.
Then I think I’d agree it’s controversial and it’d be downvoted if people realized that was what you meant. I don’t really understand why you think that, in that I could imagine a “selfless utility maximizer” for which the utility it assigned to its own mental state valence was negated… unless you consider the valence to be its utility function, in which case it wouldn’t be controversial at all. This would actually be something like my preferred form of utilitarianism; however, it would definitely involve caring about things other than oneself. If you wanted to derive that care for other things from selfish utility maximization alone, you would need to employ a decision theory, would you not? I get the impression I am still missing something.
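The “selfless utility maximizer” being imagined here can be written out as a sketch (the decomposition is my own illustration, not the commenter’s): let $v_i(s)$ be the valence of agent $i$’s own mental state in outcome $s$, and give the agent the utility function

$$U_{\text{selfless}}(s) \;=\; \sum_{j \neq i} u_j(s) \;-\; v_i(s),$$

which negates the term for its own valence. If “valence is the utility function,” i.e. $U_i \equiv v_i$ by definition, such an agent is ruled out; on the broader reading it is perfectly coherent, which seems to be where the two positions come apart.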
Perhaps here is where the controversy comes in. The utilitarian comes along and says, “I want to maximize utility!” And everyone thinks, “great! she wants to help everyone out!” The selfish egoist comes along and says, “I am just going to fulfill whatever selfish desires I have!” And everyone thinks, “wow, that’s scary! what stops you from murdering people?”
I think, also, there is a sense in which utilitarians work to maximize the same utility function. This is also true for selfish egoists, but they’re both better and worse at negotiating: they are more prone to negotiate, but utilitarians make mistakes that are biased towards reaching a consensus, just because they solve the problem from different directions.
“The selfish egoist comes along and says, ‘I am just going to fulfill whatever selfish desires I have!’ And everyone thinks, ‘wow, that’s scary! what stops you from murdering people?’”

I can certainly imagine a selfish (under my definition) superintelligence which does want to murder everyone to… turn them into paperclips, for example. The fact that its utility function doesn’t have additional terms for (valuing the conscious experience of) other entities is what makes it so dangerous. Am I correct to state that this is not what you mean when you say ‘selfish’?
“I think, also, there is a sense in which utilitarians work to maximize the same utility function.”
Could you explain this? I could certainly imagine utilitarians converging on the same behaviour, but that seems different, even at a mathematical level, from actually being maximizers of one another’s utility functions.
Sorry, I don’t really want to make this a long thing. I have written a little on this elsewhere (1, 2, 3).
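For concreteness, the distinction raised in the question above can be put symbolically (my own notation again, not anything from the thread): “maximizing the same utility function” is an identity of functions, while “converging on the same behaviour” only requires that the maximizations agree on the choices actually faced,

$$U_i \equiv U_j \qquad \text{versus} \qquad \operatorname*{arg\,max}_{a}\, \mathbb{E}[U_i(s) \mid a] \;=\; \operatorname*{arg\,max}_{a}\, \mathbb{E}[U_j(s) \mid a],$$

and the second can hold while the first fails.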