I was very surprised to find that a supporter of the Complexity of Value hypothesis, and an author who warns against simple utility functions, advocates torture on the strength of a simple, pseudo-scientific utility calculus.
My utility function has constraints that prevent me from doing awful things to people, unless doing so would prevent equally awful things from being done to other people. That this is a widely shared moral intuition is demonstrated by the reaction in the comments section. Since you recognize the complexity of human value, my widely shared preferences are presumably valid.
In fact, the mental discomfort felt by the people who heard of the torture would swamp the disutility from the dust specks. Which brings us to an interesting question: is morality carried by events, or by information about events? If nobody else knew of my choice, would that make it better?
For a utilitarian, the answer is clearly that the information about morally significant events is what matters. I can imagine so-called Friendly AIs built on utilitarian principles doing lots of awful things in secret to achieve their ends.
Also, I’m interested to hear how many of those who chose torture would change their minds if we killed the guy instead of just torturing him. How far does your “utility is all that matters” philosophy go?
First, I don’t buy that summing utilons across people is a valid operation; plenty of philosophers have objected to it. This is a bullet-biting club, and I get that. I’m just not biting those bullets. I don’t think centuries of criticism of utilitarianism can be resolved by biting every bullet. And in Eliezer’s recent writings, it appears he is beginning to understand this, which is great; it reduces the odds that he becomes a moral monster.
Second, I value things other than maximizing utilons. I got the impression that Eliezer/Less Wrong agreed with me on that from the Complex Values post and posts about the evils of paperclip maximizers. So great evils are qualitatively different to me from small evils, even small evils done to a great number of people!
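To make that concrete, here is a minimal sketch (Python, purely illustrative; the numbers, the threshold, and every name in it are invented for this comment, not anything Eliezer or anyone else has proposed) of the gap between straight utilon summation and a utility function with a hard constraint on severe harms:

```python
# Toy illustration only: made-up disutility numbers, not a real moral calculus.

SPECK_DISUTILITY = 1e-10      # one barely noticed dust speck
TORTURE_DISUTILITY = 1e7      # fifty years of torture for one person
SEVERITY_THRESHOLD = 1e3      # harms above this count as "great evils"
NUM_SPECKED = 10 ** 30        # stand-in for 3^^^3, which is far too large to write out

def naive_sum(per_person_harm, num_people):
    """Straight summation: all harms are commensurable and simply add up."""
    return per_person_harm * num_people

def constrained_choice(option_a, option_b):
    """Prefer whichever option avoids an above-threshold harm; fall back to the
    raw sum only when both options (or neither) cross the threshold."""
    a_severe = option_a["per_person"] > SEVERITY_THRESHOLD
    b_severe = option_b["per_person"] > SEVERITY_THRESHOLD
    if a_severe != b_severe:
        return option_a if b_severe else option_b
    a_total = naive_sum(option_a["per_person"], option_a["people"])
    b_total = naive_sum(option_b["per_person"], option_b["people"])
    return option_a if a_total <= b_total else option_b

specks = {"name": "dust specks", "per_person": SPECK_DISUTILITY, "people": NUM_SPECKED}
torture = {"name": "torture", "per_person": TORTURE_DISUTILITY, "people": 1}

speck_total = naive_sum(specks["per_person"], specks["people"])      # 1e20
torture_total = naive_sum(torture["per_person"], torture["people"])  # 1e7

# The pure summer chooses torture, because the speck total dwarfs the torture total.
print("naive sum picks:", "torture" if torture_total < speck_total else "dust specks")

# The constrained function chooses the specks, no matter how many people are dusted,
# because only the torture crosses the severity threshold.
print("constrained choice picks:", constrained_choice(specks, torture)["name"])
```

The point is only that the two functions can disagree no matter how large the number of specked people gets; which of them is the right one is exactly what is in dispute.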
I get what you’re trying to do here. You’re trying to demonstrate that ordinary people are innumerate, and you’re all getting a utility spike from imagining yourselves more rational than they are by choosing the “right” (naive hyper-rational utilitarian-algebraist) answer. But I don’t think it’s that simple when we’re talking about morality. If it were, a philosophical project that has lasted 2,500 years would finally be over!