Saving this exchange between Tyler Cowen and Peter Singer for my own future reference:
COWEN: Well, take the Bernard Williams question, which I think you’ve written about. Let’s say that aliens are coming to Earth, and they may do away with us, and we may have reason to believe they could be happier here on Earth than what we can do with Earth. I don’t think I know any utilitarians who would sign up to fight with the aliens, no matter what their moral theory would be.
SINGER: Okay, you’ve just met one.
COWEN: I’ve just met one. So, you would sign up to fight with the aliens?
SINGER: If the hypothesis is like that, that the aliens are wiser than we are, they know how to make the world a better place for everyone, they’re giving full weight to human interests, but they say, “Even though we’re giving full weight to human interests, not discounting your interests because you’re not a member of our species, as you do with animals, but unfortunately, it just works out that to produce a better world, you have to go,” I’ll say, “Okay, if your calculations are right, if that’s all right, I’m on your side.”
COWEN: You’re making them a little nicer. You’re calling them wise. They may or may not be wise. They’re just happier than we are. They have less stress, less depression. If they could rule over Earth, they would make a better go of it than we would. I would still side with the humans.
SINGER: I would not. What you’ve shown now is that their interest happens to coincide with the universal good. That’s the way to produce more happiness, full stop, not just more happiness for them. And if that’s the case, I’m on their side.
COWEN: How do we know there is a universal good? You’re selling out your fellow humans based on this belief in a universal good, which is quite abstract, right? The other smart humans you know mostly don’t agree with you, I think, I hope.
SINGER: But you’re using the kind of language that Bernard Williams used when he says, “Whose side are you on?” You said, “You’re selling out your fellow humans,” as if I owe loyalty to members of my species above loyalty to good in general, that is, to maximizing happiness and well-being for all of those affected by it. I don’t claim to have any particular loyalty for my species rather than the general good.
COWEN: Suppose there’s not this common metric between us and the aliens, the kind you could just measure: you hook people up to a scale, you measure, they have more of it than we do, let them come in. If that doesn’t exist, what is the common good or universal good in this setting?
SINGER: I don’t know if that doesn’t exist, but you said they’re happier than we are, which suggests that there is a common metric of happiness, and that was the basis on which I answered your question. If there’s no common metric, I don’t really have an answer, or I would try to use the metric of overall happiness. I’m not sure why I wouldn’t be able to use that, but if we assume that I couldn’t, then I would just not know what to do.
COWEN: So you wouldn’t fight for our side. Even then, you’d throw up your hands or just not be sure what to do.
SINGER: No, this is not about a football team. You can give your loyalty to a football team and support them, even though you don’t really think that they’re somehow more morally worthy of winning than their opponents. But this is not a game like that. There’s everything at stake.
COWEN: To what extent for you is utilitarianism not only a good theory of outcomes but also a theory of obligation? I’m sure you know the Donald Regan literature, this “Oh, you prefer the outcome with more utility,” but “What should I do?” can still be a complex question.
SINGER: Well, it can be a complex question in the sense that it may be that we don’t want to directly aim at utility because we’re likely to get things wrong. If we can be confident in our calculations that we are doing the right thing, then I think the obligations that we have are to maximize utility. But it’s been argued that we’re more likely to make mistakes if we do that, and rather that our obligation should be to conform to certain principles or rules. I think that depends on how confident you are in your ability.
I certainly think we should follow rules of thumb sometimes, when we can’t be sure of what’s the right outcome, and we should do what generally is accepted. You go back to Sam Bankman-Fried. Obviously, I think that was his mistake. He was too confident that he could get things right and fix things and didn’t follow basic rules, or at least it’s alleged that he didn’t follow basic rules, like “Don’t steal your clients’ money.”
COWEN: Isn’t there a dilemma above and beyond the epistemic dilemma? Say, you, Peter Singer, you’re programming a driverless car and you’re in charge. Ideally, you would like to program the car to be a utilitarian and Benthamite car, that if it has to swerve, it would sooner kill one older person than two younger people, and so on.
Let’s say you also knew that if you programmed the driverless car to be Benthamite, basically, the law would shut it down, public opinion would rebel, you’d get in trouble, the automaker would get in trouble. How then would you program the car?
SINGER: Yes, I would program it to produce the best consequences that would not be prohibited by the government or the manufacturer. I’m all in favor of making compromises if you have to, to produce the most good that you possibly can in the circumstances in which you are.
COWEN: Doesn’t that then mean individuals should hold onto some moral theory that may be quite far from utilitarianism? It’s not just a compromise. You need to be very intuition driven, nonutilitarian just to get people to trust you, to work with you, to cooperate. In that sense, at the obligation level, you’re not so utilitarian at all.
SINGER: You may be. That will depend on your own nature, as to whether you think you’re going to be led astray if you’re not intuition driven. Or you may think that you can be self-aware about the risks that you’re going to go wrong. You’re not exactly intuition driven, but you’re driven by the thought that “I could be mistaken here, and it’s probably going to have more value if I don’t just directly think about how to produce the most utility.”
when we select an action in these thought experiments, we’re also implicitly selecting a policy for selecting actions.
a world where, when two people meet, the “less happy” one signs all their property over to the “more happy” one and then dies is… just not that much fun. sort of lonely. uncaring. not my values.
if the aliens are the sort who expect this of me, then i will fight them tooth and nail, as their happiness is not a happiness i can care about. this is regardless of how much they might—on a sort of “object level”—thrive.
i don’t think Cowen and Singer disagree about this. rather it seems that Singer holds that all of this (the ground-level notion of thriving, plus the policy decisions/path dependence) can be recovered from the utility function + thinking about it. so when the question is posed “would you even go so far as to support your own demise if [the utility function would improve]?” what’s heard is “would you even go so far [...] in order to make the universe better?” to which the answer is—morally speaking, at least—obvious.
on the other hand, Cowen thinks of a utility function as merely an ordering over world-snapshots, without reference to the history of how they got there. so the question asked is implicitly “would you support a dreadful policy that increases suffering, just to hear a bit more laughter?”. again, the answer is obvious.
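to make the two readings concrete, here’s a toy python sketch. this is entirely my own framing, nothing from the transcript, and every name in it (`snapshot_happiness`, `trajectory_welfare`, `absorb`, and so on) is invented for illustration. one scorer ranks world-snapshots by how happy their inhabitants are; the other scores the whole history a policy produces. the “less happy one signs everything over and dies” policy from above wins on the final snapshot and loses on the trajectory:

```python
# toy sketch, my own framing (not from the transcript); all names invented
from typing import Callable, List

State = List[float]                    # one happiness value per person
Policy = Callable[[State], str]
Transition = Callable[[State, str], State]

def snapshot_happiness(state: State) -> float:
    # cowen's reading: rank world-snapshots by how happy the people in
    # them are, with no reference to how the world got there
    return sum(state) / len(state)

def trajectory_welfare(policy: Policy, state: State,
                       step: Transition, horizon: int) -> float:
    # the other reading: score the *policy*, summing everyone's welfare
    # over the whole history it produces, so path dependence counts
    total = 0.0
    for _ in range(horizon):
        state = step(state, policy(state))
        total += sum(state)
    return total

def final_state(policy: Policy, state: State,
                step: Transition, horizon: int) -> State:
    for _ in range(horizon):
        state = step(state, policy(state))
    return state

def absorb(state: State, action: str) -> State:
    # the "less happy one signs everything over and dies" rule
    if action == "absorb" and len(state) > 1:
        survivors = sorted(state)[1:]   # the least happy person is gone
        survivors[-1] += 1.0            # the happiest inherits their property
        return survivors
    return state

always_absorb: Policy = lambda s: "absorb"
live_and_let_live: Policy = lambda s: "wait"

world = [3.0, 4.0, 5.0]

# ranking final snapshots, absorbing wins: mean happiness 7.0 vs 4.0
print(snapshot_happiness(final_state(always_absorb, world, absorb, 3)))
print(snapshot_happiness(final_state(live_and_let_live, world, absorb, 3)))

# scoring whole trajectories, absorbing loses: 24.0 vs 36.0
print(trajectory_welfare(always_absorb, world, absorb, 3))
print(trajectory_welfare(live_and_let_live, world, absorb, 3))
```

which of those two numbers “the utility function” picks out is, i think, exactly where the two of them talk past each other.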