‘Altruism’ for me doesn’t mean “I assign infinite value to my own happiness (and freedom, beauty, etc.) and 0 to others’, but everyone would be better off (myself included) if I sacrificed my own happiness for others’. So I’ll sacrifice my own happiness for others’.” Rather, I assign some value to my own happiness, but a lot more value to others’ happiness. I care unconditionally about others’ happiness.
Since it’s only a Prisoner’s Dilemma if I value ‘I defect, you cooperate’ over ‘we both cooperate’, high-stakes ‘defecting’ would for me mean directly indulging my desire to help others, while ‘cooperating’ via UDT would mean sacrificing humanity’s welfare in some small way in order to keep a non-utilitarian agent from doing even more to reduce it. The structure of the PD has nothing to do with whether the agents are selfish or altruistic, so long as their selfishness or altruism is already folded into the payoffs when the game is set up; the sketch below makes this concrete.
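A minimal sketch of that payoff point (my own illustration, not from the original comment): a symmetric 2x2 game is a Prisoner’s Dilemma only when the standard ordering T > R > P > S holds over the agents’ *actual* utilities. The `alpha` weighting and function names below are hypothetical, just one way an altruist’s values might transform raw material payoffs.

```python
# Sketch: checking whether a 2x2 game is still a Prisoner's Dilemma
# once payoffs are filtered through an agent's values.
# Standard payoff names: T (temptation: I defect, you cooperate),
# R (reward: both cooperate), P (punishment: both defect),
# S (sucker: I cooperate, you defect).

def is_prisoners_dilemma(T, R, P, S):
    """A symmetric 2x2 game is a PD iff T > R > P > S."""
    return T > R > P > S

def altruistic_utility(own, other, alpha):
    """Hypothetical altruist: weights the other player's raw payoff by alpha."""
    return own + alpha * other

# Raw (material) payoffs for a textbook PD.
T, R, P, S = 5, 3, 1, 0
print(is_prisoners_dilemma(T, R, P, S))  # True: over material payoffs it's a PD.

# An agent who weights the other's payoff at alpha = 2 experiences
# different effective payoffs for the same four outcomes:
alpha = 2
T2 = altruistic_utility(T, S, alpha)  # 5 + 2*0 = 5  (I defect, you cooperate)
R2 = altruistic_utility(R, R, alpha)  # 3 + 2*3 = 9  (both cooperate)
P2 = altruistic_utility(P, P, alpha)  # 1 + 2*1 = 3  (both defect)
S2 = altruistic_utility(S, T, alpha)  # 0 + 2*5 = 10 (I cooperate, you defect)

print(is_prisoners_dilemma(T2, R2, P2, S2))  # False: T2 < R2, so no dilemma.
```

With these (made-up) numbers the altruist no longer prefers ‘I defect, you cooperate’ over mutual cooperation, so the game stops being a PD for them; a genuine PD between altruists would need payoffs that preserve the T > R > P > S ordering *after* this transformation, which is exactly the “take that into account when initially calculating payoffs” caveat.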
Thought experiments like Singer’s are how I found out that I do in fact terminally value people who are distant from me in space (and time). My behavior isn’t perfectly utilitarian, but I’d take a pill to become more so, so my revealed preferences aren’t what I’d prefer them to be.