I must ask—what is the purpose of ‘Overcoming Bias’ now that ‘Less Wrong’ is launched? Why post this here instead of there?
Thom_Blake
retired urologist,
There’s a distinction to be made between altruism (ethical theory) and altruism (social science). The sense of altruism you use seems more to agree with the former. It seems like Eliezer prefers the latter. To summarize:
Altruism (ethical theory) is just like utilitarianism, except that good for oneself is entirely discounted.
Altruism (social science) is a ‘selfless concern for others’, in which one helps other people without conscious concern for one’s personal interests (at least some of the time). It does not require abandoning one’s own interests entirely in the pursuit of helping others.
Note that the latter is merely descriptive of behavior. Thus Eliezer can say “I behave altruistically” and “I am a utilitarian” (probably not direct quotes) simultaneously without contradiction.
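To make the contrast in the ethical-theory sense concrete, here is one toy formalization (my own notation, not anyone’s official definition), where \(u_i\) is the utility accruing to person \(i\) and “self” indexes the agent:

```latex
% Utilitarianism: maximize total utility, the agent's own included.
U_{\text{util}} = \sum_{i} u_i

% Altruism (ethical theory): identical, except that good for
% oneself is entirely discounted.
U_{\text{alt}} = \sum_{i \neq \text{self}} u_i
```

Note that the social-science sense makes no such maximization claim at all; it is only a description of behavior.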
It’s getting to the point where ethicists have to define ‘ethical x’ for all ‘x’ to distinguish it from its use in other fields.
Eliezer,
I prefer this style. It’s a much more interesting and entertaining read. It has a ‘wisdom of the ancients’ feel which, while obviously meant to be ironic, has (I think) a greater chance of being remembered in 1000 years.
I’m not sure about the judges example. There have been judges who took sides on high-profile issues (like abortion or gay marriage), and whose reputations consequently turned to mud amongst those on the other side of the issue.
EY, but you are a moral realist (or at least a moral objectivist, which ought to refer to the same thing). There’s a fact about what’s right, just like there’s a fact about what’s prime or what’s baby-eating. It’s a fact about the universe, independent of what anyone has to say about it. If we were human′, we’d be moral′ realists talking about what’s right′. Ne?
Anonymous, that sound you hear is probably people rushing to subscribe. http://www.rifters.com/crawl/?p=266 - note the comments.
Sebastian,
Here there is an ambiguity between ‘bias’ and ‘value’ that is probably not going to go away. EY seems to think that bias should be eliminated but values should be kept. That might be most of the distinction between the two.
Nick,
There is a tendency for some folks to distinguish between descriptive and normative statements, in the sense of ‘one cannot derive an ought from an is’ and whatnot. A lot of this comes from hearing about the “naturalistic fallacy” and believing this to mean that naturalism in ethics is dead. Naturalists in turn refer to this line of thinking as the “naturalistic fallacy fallacy”, as the strong version of the naturalistic fallacy does not imply that naturalism in ethics is wrong.
As for the fallacy you mention, I disagree that it’s a fallacy. It makes more sense to me to take “I value x” and “I act as though I value x” to be equivalent when one is being honest, and to take both of those as different from (an objective statement of) “x is good for me”. This analysis of course only counts if one believes in akrasia—I’m really still on the fence on that one, though I lean heavily towards Aristotle.
Manon, thanks for pointing that out—I’d left that out of my analysis entirely. I too would like Untranslatable 2. It doesn’t change my answer, though, as it turns out.
Nick,
Behavior isn’t an argument (except when it is), but it is evidence. And it’s akrasia when you say, “Man, I really think spending this money on saving lives is the right thing to do, but I just can’t stop buying ice cream”—not when you say “buying ice cream is the right thing to do”. Even if you are correct in your disagreement with Simon about the value of ice cream, that would be a case of Simon being mistaken about the good, not a case of Simon suffering from akrasia. And I think it’s pretty clear from context that Simon believes he values ice cream more.
And it sounds like that first statement is an attempt to invoke the naturalistic fallacy fallacy. Was that it?
I prefer the ending where we ally ourselves with the babyeaters to destroy the superhappies. We realize that we have more in common with the babyeaters, since they have notions of honor and justified suffering and whatnot, and encourage the babyeaters to regard the superhappies as flawed. The babyeaters will gladly sacrifice themselves blowing up entire star systems controlled by the superhappies to wipe them out of existence due to their inherently flawed nature. Then we slap all of the human bleeding-hearts that worry about babyeater children, we come up with a nicer name for the babyeaters, and they (hopefully) learn to live with the fact that we’re a valuable ally that prefers not to eat babies but could probably be persuaded given time.
P.S. anyone else find it ironic that this blog has measures in place to prevent robots from posting comments?
Julian,
And possibly billions of Huygens humans. Don’t forget those.
Humanity could always offer to sacrifice itself. Compare the world where humanity compromises with both the Babyeaters and the Super Happy, versus one where we convince them to not compromise and instead make everybody Super Happy.
Of course, I’m just guessing, since I’m not a Utilitarian.
Rudd-O,
That’s not the idea I’m getting at all (free retaliation, etc.). It seems more to me that these people can’t imagine intentionally hurting or being distrustful of each other, and so when they say ‘rape’, they think ‘tickle fight’.
spriteless,
That’s what I was thinking. Perhaps the newcomer engineered this meetup somehow to see whether the two species are safe to contact.
This makes eudaimonist egoism seem simpler, more elegant by comparison. I don’t need a stream of victims now, and I won’t need them post-Singularity.
Doug S,
Indeed. The AI wasn’t paying attention if he thought bringing me to this place was going to make me happier. My stuff is part of who I am; without my stuff he’s quite nearly killed me. Even more so when ‘stuff’ includes wife and friends.
But then, he was raised by one person, so there’s no reason to think he wouldn’t believe in a wrong metaphysics of self.
James,
I wonder the same thing. Given that reality is allowed to kill us, it seems that this particular dystopia might be close enough to good. How close to death do you need to be before unleashing the possibly-flawed genie?
Eliezer,
I must once again express my sadness that you are devoting your life to the Singularity instead of writing fiction. I’ll cast my vote towards the earlier suggestion that perhaps fiction is a good way of reaching people and so maybe you can serve both ends simultaneously.
For the record, Thom_Blake is thomblake.