I talked to other people about such calls. They called me evil. Apparently, people don’t see the proposition “Aid is good” as following from “Aid helps people” (a purely factual claim) and “Helping people is good” (which only evil people deny); it’s all in the same mental bucket. So we’re pretty much screwed explaining it. Moreover, even when people finally get the distinction, the claims tend to be rejected at the speed of thought—because we all know “Aid is good”.
I’m somewhat puzzled that all the influences you quote are fiction. I read and watched fiction as a child, and the only obvious consequences for my personality have been 1) extremely distorted—I can recognize the influence because I remember it, but you couldn’t look at that part of my personality and say “Aha, that came from Disney movies!”; 2) tossed out of the window in a recent crisis of faith; 3) more influenced by real life than by fiction. I’ve been recalculating a lot of things since I was as young as 4 (most of which ended up wrong for lack of evidence and a few fundamental mistakes), with a wave of recalculation each time I uncovered a fundamental mistake (this happened twice), and many recalculations ended up in a very different place from their starting point, which gives somewhat more credence to the “lovely excuse” when it applies.
What did I pick up from childhood? Altruism? I can’t trace back the causal line, I don’t remember a point at which I wasn’t altruistic in full generality—I do remember stories about “altruism = good” and “ingroup/outgroup dichotomy = bad”, but I already agreed with that. What I remember picking up were social norms of the form “Saying ‘X is Y’ is good”—but unlike other children, I picked up “X is Y”—“Truth is good”, “Death is bad” (didn’t quite believe that one, had to recalculate later), “Love is good” (tossed out of the window when I realized “love” is vague). But I picked up those from social life, not fiction—and I was a stereotypical bookworm. I may have confused “good fiction” and “good life” due to fiction, but real life influences look more like the culprits.
The simplest hypothesis is not “People are embarrassed”. I bet they simply don’t know. Most people are just terrible at introspection, and don’t even think about it.
Also, yes, I’m going to get you started. Incredible disregard for what?
I find this harder to read. The arguments are obscured. The structure sucks; claims are not isolated into neat little paragraphs so that I can stop and think “Is this claim actually true?”. It’s about you (why you aren’t Wise) rather than about the world (how Wisdom works).
I’ve rarely heard “You’ll understand when you’re older” on questions of simple fact. Usually, it’s uttered when someone who claims to be altruistic points out that someone else’s actions are harmful. The Old Cynic then tells the Young Idealist “I used to be like you, but then I realized you’ve got to be realistic; you’ll understand when you’re older that you should be more selfish.”. But they never actually offer an object-level argument, or even seem to have changed their minds for rational reasons—it looks like the Selfishness Fairy just changed their terminal values as they grew older. That may be the case; it may also be sour-grapes bias: when they realized their altruism could never have as big an effect as it ought to, they decided altruism wasn’t right after all. The best defense I can come up with is: If your moral intuitions change, especially in a way you’ve previously noticed as “maturing”, only trust them if your justifications would convince your past self at their most idealistic.
Is this “stupid teenager” thing real, or just a stereotype that sells books? I’ve seen teenagers drink and drive; they don’t look like they do it to look adult. I’ve tried some drugs and turned others down, and the only things that (as far as I’m aware) factored in were what I could learn from the experience, how pleasant it would be, and the risks. I consciously ignored peer pressure—as for looking mature, I simply didn’t consider it could be a criterion, any more than the parity of my number of nose hairs.
Oh, I’m starting to see why the Superhappies are not so right after all, what they lack, why they are alien, in the Normal Ending and in Eliezer’s comments. I think this should have been explained in more detail in the story, because I initially failed to see their offer as anything but good, let alone bad enough to kill yourself. I want untranslatable 2!
Still, if I had been able to decide on behalf of humanity, I would have tried to make a deal—not outright accepted their offer, but negotiated to keep more of what matters to us, maybe by adopting more of their emotions, or asking lesser modifications of them. It just doesn’t look that irreconcilable.
Also, their offer to have the Babyeaters eat nonsentient children sounds stupid—like replacing our friends and lovers with catgirls.
Wait. Aren’t they right? I don’t like that they don’t terminally value sympathy (though they’re pretty close), but that’s beside the point. Why keep the children suffering? If there is a good reason—that humans need a painful childhood to explore, learn and develop properly, for example—shouldn’t the Super Happy be convinced by that? They value other things than a big orgasm—they grow and learn—they even tried to forsake some happiness for more accurate beliefs—if, despite this, they end up preferring stupid happy superbabies to painful growth, it’s likely we’d agree. I don’t want to just tile the galaxy with happiness counters—but if collapsing into orgasmium means the Super Happy, sign me up.
Eliezer, why do you hate death so much? I understand why you’d hate it as much as the social norm wants you to say you do, but not so much more. People don’t hate death, and don’t even say they hate death nearly as much as you do. I can’t think of a simpler hypothesis than “Eliezer is a mutant”.
Now, of course, throwing in the long, painful agony of children changes something.
@Jotaf: No, you misunderstood—guess I got double-transparent-deluded. I’m saying this:
Probability is subjectively objective
Probability is about something external and real (called truth)
Therefore you can take a belief and call it “true” or “false” without comparing it to another belief
If you don’t match truth well enough (if your beliefs are too wrong), you die
So if you’re still alive, you’re not too stupid—you were born with a smart prior, so you’re justified in having it
So I’m happy with probability being subjectively objective, and I don’t want to change my beliefs about the lottery. If the paperclipper had stupid beliefs, it would be dead—but it doesn’t, it has evil morals.
Morality is subjectively objective
Morality is about some abstract object, a computation that exists in Formalia but nowhere in the actual universe
Therefore, if you take a morality, you need another morality (possibly the same one) to assess it, rather than a nonmoral object
Even if there were some light in the sky you could test morality against, it wouldn’t kill you for your morality being evil
So I don’t feel on better moral ground than the paperclipper. It has human_evil morals, but I have paperclipper_evil morals—we are exactly equally horrified.
@Eliezer: Can you expand on the “less ashamed of provincial values” part?
@Carl Shuman: I don’t know about him, but for myself, HELL YES I DO. Family—they’re just randomly selected by the birth lottery. Lovers—falling in love is some weird stuff that happens to you regardless of whether you want it, reaching into your brain to change your values: like, dude, ew—I want affection and tenderness and intimacy and most of the old interpersonal fun and much more new interaction, but romantic love can go right out of the window with me. Friends—I do value friendship; I’m confused; maybe I just value having friends, and it’d rock to be close friends with every existing mind; maybe I really value preferring some people to others; but I’m sure about this: I should not, and do not want to, worry more about a friend with the flu than about a stranger with cholera.
@Robin Hanson: HUH? You’d really expect natural selection to come up with minds who enjoy art, mourn dead strangers and prefer a flawed but sentient woman to a perfect catgirl on most planets?
This talk about “‘right’ means right” still makes me damn uneasy. I don’t have more to show for it than “still feels a little forced”—when I visualize a humane mind (say, a human) and a paperclipper (a sentient, moral one) looking at each other in horror, knowing there is no way they could agree about whether to use atoms to feed babies or to make paperclips, I feel wrong. I think about the paperclipper in exactly the same way it thinks about me! Sure, that’s also what happens when I talk to a creationist, but we’re trying to approximate external truth; and if our priors were too stupid, our genetic line would be extinct (or at least that’s what I think) - but morality doesn’t work like probability, it’s not trying to approximate anything external. So I don’t feel much happier about the moral miracle that made us than about the one that made the paperclipper.
Oh please. Two random men are more alike than a random man and a random woman, okay, but seriously, a difference so huge that it makes it necessary to either rewrite minds to be more alike or separate them? First, anyone who prefers to socialize with the opposite gender (ever met a tomboy?) is going to go “Ew!”. Second, I’m pretty sure there are more than two genders (if you want to say genderqueers are lying or mistaken, the burden of proof is on you). Third, neurotypicals can get along with autists just fine (when they, you know, actually try), and this makes the difference between genders look hoo-boy-tiiiiny. Fourth—hey, I like diversity! Not just knowing there are happy different minds somewhere in the universe—actually interacting with them. I want to sample ramensubspace every day over a cup of tea. No way I want to make people more alike.
I don’t see how removing getting-used-to is close to removing boredom. IANAneurologist, but on a surface level, they do seem to work differently—boredom is reading the same book every day and getting tired of it; habituation is getting a new book every day and no longer thinking “Yay, new fun”.
I’m reluctant to keep habituation because, at least in some cases, it is evil. When the emotion is appropriate to the event, it’s wrong for it to diminish—you have a duty to rage against the dying of the light. (Of course we need it for survival; we can’t be mourning all the time.) It also looks linked to status quo bias.
Maybe, like boredom, habituation is an incentive to make life better; but it’s certainly not optimal.
I’m going to stick out my neck. Eliezer wants everyone to live. Most people don’t.
People care about their and their loved ones’ immediate survival. They discount heavily for long-term survival. And they don’t give a flying fuck about the life of strangers. They say “Death is bad.”, but the social norm is not “Death is bad.”, it’s “Saying ‘Death is bad.’ is good.”.
If this is not true, then I don’t know how to explain why they dismiss cryonics out of hand with arguments about how death is not that bad that are clearly not their true rejection. The silliness heuristic explains believing it would fail, or that it’s a scam—not rejecting the principle. Status quo and naturalistic bias explain part of the rejection, but surely not the whole thing.
And it would explain why I was bewildered, thinking “Why would you want a sucker like me to live?” even though I know Eliezer truly values life.
Actually, the Mystic Eyes of Depth Perception are pretty underwhelming. You can tell how far away things are with one eye most of the time. The difference is big enough to give a significant advantage, but nothing near superpower level. My own depth perception is crap (better than one eye though), and I don’t often bump into walls.
Nazir Ahmad Bhat, you are missing the point. It’s not a question of identity, like which ice cream flavor you prefer. It’s about truth. I do not believe there is a teapot orbiting around Jupiter, for the various reasons explained on this site (see Absence of evidence is evidence of absence and the posts on Occam’s Razor). You may call this a part of my identity. But I don’t need people to believe in a teapot. Actually, I want everyone to know as much as possible. Promoting false beliefs is harming people, like slashing their tires. You don’t believe in a flying teapot: do you need other people to?
Eliezer, sure, but that can’t be the whole story. I don’t care about some of the stuff most people care about. Other people whose utility functions differ in similar but different ways from the social norm are called “psychopaths”, and most people think they should either adopt their morals or be removed from society. I agree with this.
So why should I make a special exception for myself, just because that’s who I happen to be? I try to behave as if I shared common morals, but it’s just a gross patch. It feels tacked on, and it is.
I expected (though I had no idea how) that you’d come up with an argument that would convince me to fully adopt such morals. But what you said would apply to any utility function. If a paperclip maximizer wondered about morality, you could tell it: “‘Good’ means ‘maximizes paperclips’. You can think about it all day long, but you’d just end up making a mistake. Is that worth forsaking the beauty of tiling the universe with paperclips? What do you care that there exist, in mindspace, minds that drag children off train tracks?” and it’d work just as well. Yet if you could, I bet you’d choose to make the paperclip maximizer adopt your morals.
Constant: “Give a person power, and he no longer needs to compromise with others, and so for him the raison d’etre of morality vanishes and he acts as he pleases.”
If you could do so easily and with complete impunity, would you organize fights to the death for your pleasure? Would you even want to? Moreover, humans are often tempted to do things they know they shouldn’t, because they also have selfish desires. AIs don’t, unless you build such desires into them. If they really do ultimately care about humanity’s well-being, and don’t take any pleasure in making people obey them, they will keep doing so.
I’m confused. I’ll try to rephrase what you said, so that you can tell me whether I understood.
“You can change your morality. In fact, you do it all the time, when you are persuaded by arguments that appeal to other parts of your morality. So you may try to find the morality you really should have. But—“should”? That’s judged by your current morality, which you can’t expect to improve by changing it (you expect a particular change would improve it, but you can’t tell in what direction). Just like you can’t expect to win more by changing your probability estimate of winning the lottery.
Moreover, while there is such a fact as “the number on your ticket matches the winning number”, there is no ultimate source of morality out there, no way to judge Morality_5542 without appealing to another morality. So not only can you not jump to another morality, you also have no reason to want to: you’re not trying to guess some true morality.
Therefore, just keep whatever morality you happen to have, including your intuitions for changing it.”
Did I get this straight? If I did, it sounds a lot like a relativistic “There is no truth, so don’t try to convince me”—but there is indeed no truth, as in, no objective morality.
This argument sounds too good to be true—when you apply it to your own idea of “right”. It also works for, say, a psychopath unable to feel empathy who gets a tremendous kick out of killing. How is there not a problem with that?
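The lottery analogy can be made concrete with a toy simulation (a sketch of my own, not anything from the original discussion; the agents and their estimates are made up): two agents hold very different subjective probabilities of winning, but only the actual draw determines how often tickets win, so revising your estimate changes your calibration, never your winnings.

```python
import random

random.seed(0)  # reproducible toy run

def play_lottery(n_draws, n_numbers=100):
    """Count wins over n_draws of a 1-in-n_numbers lottery."""
    return sum(1 for _ in range(n_draws)
               if random.randrange(n_numbers) == 0)

# Two agents with different subjective probabilities of winning.
# Their beliefs never enter the draw itself.
belief_optimist = 0.5    # wildly inflated estimate
belief_realist = 0.01    # matches the actual odds

wins = play_lottery(10_000)
observed = wins / 10_000

# The realist's estimate tracks the observed frequency; the optimist's
# does not. Changing your estimate changed nothing about the ticket.
print(observed)
```

The asymmetry in the comment is exactly this: for probability there is an `observed` frequency to be wrong about; for morality, on this view, there is no analogous external draw.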
No! The problem is not reductionism, or that morality is or isn’t about my brain! The problem is that what morality actually computes is “What should you feel-moral about in order to maximize your genetic fitness in the ancestral environment?”. Unlike math, which is more like “What axioms should you use in order to develop a system that helps you in making a bridge?” or “What axioms should you use in order to get funny results?”. I care about bridges and fun, not genetic fitness.
Actually, “Whatever turns y’all on” is a pretty damn good morality. Because it makes sense on an intuitive level (it looks like what selfishness would be if other people were you). Because it doesn’t care too much where your mind comes from, as it maximizes whatever turns you on. Because it mostly adds up to normality. Possibly because it’s what I used, so I’m biased. Though I don’t think you quite get normality - killing is a minor offense here, because people don’t get to experience it.
Folks, we covered that already! “You should open the door before you walk through it.” means “Your utility function ranks ‘Open the door, then walk through it’ above ‘Walk through the door without opening it’”. YOUR utility function. “You should not murder.” is not just reminding you of your own preferences. It’s more like “(The ‘morality’ term of) my utility function ranks ‘You murder’ below ‘You don’t murder’.”, and most “sane” moralities tend to regard “this morality is universal” as a good thing.
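That reading of “should” can be sketched in a few lines (a toy illustration of my own; the outcome labels and utility values are invented for the example):

```python
# Toy utility function over outcomes; the numbers are made up.
utility = {
    "open the door, then walk through it": 1.0,
    "walk through the door without opening it": -10.0,  # ouch
}

def should(agent_utility, option_a, option_b):
    """'You should do A rather than B' unpacks to: the agent's OWN
    utility function ranks outcome A above outcome B."""
    return agent_utility[option_a] > agent_utility[option_b]

print(should(utility,
             "open the door, then walk through it",
             "walk through the door without opening it"))  # → True
```

The door case only queries the agent’s own table; the murder case, on the comment’s reading, reports a ranking in the speaker’s table about the agent’s actions, which is why the two “should”s feel different.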