when we select an action in these thought experiments, we’re also implicitly selecting a policy for selecting actions.
a world where, when two people meet, the “less happy” one signs all their property over to the “more happy” one and then dies is… just not that much fun. sort of lonely. uncaring. not my values.
if the aliens are the sort who expect this of me, then i will fight them tooth and nail, as their happiness is not a happiness i can care about. this is regardless of how much they might—on a sort of “object level”—thrive.
i don’t think Cowen and Singer disagree about this. rather it seems that Singer holds that all of this (the ground-level notion of thriving, plus the policy decisions/path dependence) can be recovered from the utility function + thinking about it. so when the question is posed “would you even go so far as to support your own demise if [the utility function would improve]?” what’s heard is “would you even go so far [...] in order to make the universe better?” to which the answer is—morally speaking, at least—obvious.
on the other hand, Cowen thinks of a utility function as merely an ordering over world-snapshots, without reference to the history of how they got there. so the question asked is implicitly “would you support a dreadful policy that increases suffering, just to hear a bit more laughter?”. again, the answer is obvious.