The Sheer Folly of Callow Youth

“There speaks the sheer folly of callow youth; the rashness of an ignorance so abysmal as to be possible only to one of your ephemeral race...”
Gharlane of Eddore

Once upon a time, years ago, I propounded a mysterious answer to a mysterious question—as I’ve hinted on several occasions. The mysterious question to which I propounded a mysterious answer was not, however, consciousness—or rather, not only consciousness. No, the more embarrassing error was that I took a mysterious view of morality.

I held off on discussing that until now, after the series on metaethics, because I wanted it to be clear that Eliezer1997 had gotten it wrong.

When we last left off, Eliezer1997, not satisfied with arguing in an intuitive sense that superintelligence would be moral, was setting out to argue inescapably that creating superintelligence was the right thing to do.

Well (said Eliezer1997), let’s begin by asking the question: Does life have, in fact, any meaning?

“I don’t know,” replied Eliezer1997 at once, with a certain note of self-congratulation for admitting his own ignorance on this topic where so many others seemed certain.

“But,” he went on—

(Always be wary when an admission of ignorance is followed by “But”.)

“But, if we suppose that life has no meaning—that the utility of all outcomes is equal to zero—that possibility cancels out of any expected utility calculation. We can therefore always act as if life is known to be meaningful, even though we don’t know what that meaning is. How can we find out that meaning? Considering that humans are still arguing about this, it’s probably too difficult a problem for humans to solve. So we need a superintelligence to solve the problem for us. As for the possibility that there is no logical justification for one preference over another, then in this case it is no righter or wronger to build a superintelligence, than to do anything else. This is a real possibility, but it falls out of any attempt to calculate expected utility—we should just ignore it. To the extent someone says that a superintelligence would wipe out humanity, they are either arguing that wiping out humanity is in fact the right thing to do (even though we see no reason why this should be the case) or they are arguing that there is no right thing to do (in which case their argument that we should not build intelligence defeats itself).”
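As an aside, here is the bare formal shape of that “cancels out” move, reconstructed in modern notation; the labels M and ¬M are mine, not anything Eliezer1997 wrote. If M is the hypothesis that life is meaningful under some unknown utility function U, and ¬M the hypothesis that every outcome has utility zero, then the ¬M branch adds nothing to any action’s expected utility, so comparisons between actions are driven entirely by the M branch:

```latex
% Reconstructed structure of the "cancellation" argument (labels M, \neg M are mine):
\mathbb{E}[U(a)]
  = P(M)\,\mathbb{E}[U(a)\mid M] + P(\neg M)\cdot 0
  = P(M)\,\mathbb{E}[U(a)\mid M].
% The common positive factor P(M) never changes which action a maximizes this,
% so the "meaningless" possibility drops out of every comparison between actions.
```

The arithmetic is valid as far as it goes; the trouble lay in the premises feeding it.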

Ergh. That was a really difficult paragraph to write. My past self is always my own most concentrated Kryptonite, because my past self is exactly precisely all those things that the modern me has installed allergies to block. Truly is it said that parents do all the things they tell their children not to do, which is how they know not to do them; it applies between past and future selves as well.

How flawed is Eliezer1997’s argument? I couldn’t even count the ways. I know memory is fallible, reconstructed each time we recall, and so I don’t trust my assembly of these old pieces using my modern mind. Don’t ask me to read my old writings; that’s too much pain.

But it seems clear that I was thinking of utility as a sort of stuff, an inherent property. So that “life is meaningless” corresponded to utility=0. But of course the argument works equally well with utility=100, so that if everything is meaningful but it is all equally meaningful, that should fall out too… Certainly I wasn’t then thinking of a utility function as an affine structure in preferences. I was thinking of “utility” as an absolute level of inherent value.
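(As a minimal illustration of that last point, stated in standard decision-theoretic terms rather than anything Eliezer1997 actually had in mind: a von Neumann–Morgenstern utility function is only defined up to positive affine transformation, so it carries no absolute zero and no absolute “level” of value.)

```latex
% Utility functions are invariant under positive affine transformations:
U'(x) = a\,U(x) + b, \qquad a > 0
\quad\Longrightarrow\quad
\mathbb{E}[U'(A)] \ge \mathbb{E}[U'(B)] \iff \mathbb{E}[U(A)] \ge \mathbb{E}[U(B)].
% So "utility = 0 everywhere" and "utility = 100 everywhere" encode exactly the
% same (trivial) preference ordering; neither is a privileged "meaningless" level.
```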

I was thinking of should as a kind of purely abstract essence of compellingness, that-which-makes-you-do-something; so that clearly any mind that derived a should, would be bound by it. Hence the assumption, which Eliezer1997 did not even think to explicitly note, that a logic that compels an arbitrary mind to do something, is exactly the same as that which human beings mean and refer to when they utter the word “right”...

But now I’m trying to count the ways, and if you’ve been following along, you should be able to handle that yourself.

An important aspect of this whole failure was that, because I’d proved that the case “life is meaningless” wasn’t worth considering, I didn’t think it was necessary to rigorously define “intelligence” or “meaning”. I’d previously come up with a clever reason for not trying to go all formal and rigorous when trying to define “intelligence” (or “morality”)—namely, all the bait-and-switches that past AIfolk, philosophers, and moralists had pulled with definitions that missed the point.

I draw the following lesson: No matter how clever the justification for relaxing your standards, or evading some requirement of rigor, it will blow your foot off just the same.

And another lesson: I was skilled in refutation. If I’d applied the same level of rejection-based-on-any-flaw to my own position, as I used to defeat arguments brought against me, then I would have zeroed in on the logical gap and rejected the position—if I’d wanted to. If I’d had the same level of prejudice against it, as I’d had against other positions in the debate.

But this was before I’d heard of Kahneman, before I’d heard the term “motivated skepticism”, before I’d integrated the concept of an exactly correct state of uncertainty that summarizes all the evidence, and before I knew the deadliness of asking “Am I allowed to believe?” for liked positions and “Am I forced to believe?” for disliked positions. I was a mere Traditional Rationalist who thought of the scientific process as a referee between people who took up positions and argued them, may the best side win.

My ultimate flaw was not a liking for “intelligence”, nor any amount of technophilia and science fiction exalting the siblinghood of sentience. It surely wasn’t my ability to spot flaws. None of these things could have led me astray, if I had held myself to a higher standard of rigor throughout, and adopted no position otherwise. Or even if I’d just scrutinized my preferred vague position, with the same demand-of-rigor I applied to counterarguments.

But I wasn’t much interested in trying to refute my belief that life had meaning, since my reasoning would always be dominated by cases where life did have meaning.

And with the Singularity at stake, I thought I just had to proceed at all speed using the best concepts I could wield at the time, not pause and shut down everything while I looked for a perfect definition that so many others had screwed up...

No.

No, you don’t use the best concepts you can use at the time.

It’s Nature that judges you, and Nature does not accept even the most righteous excuses. If you don’t meet the standard, you fail. It’s that simple. There is no clever argument for why you have to make do with what you have, because Nature won’t listen to that argument, won’t forgive you because there were so many excellent justifications for speed.

We all know what happened to Donald Rumsfeld, when he went to war with the army he had, instead of the army he needed.

Maybe Eliezer1997 couldn’t have conjured the correct model out of thin air. (Though who knows what would have happened, if he’d really tried...) And it wouldn’t have been prudent for him to stop thinking entirely, until rigor suddenly popped out of nowhere.

But neither was it correct for Eliezer1997 to put his weight down on his “best guess”, in the absence of precision. You can use vague concepts in your own interim thought processes, as you search for a better answer, unsatisfied with your current vague hints, and unwilling to put your weight down on them. You don’t build a superintelligence based on an interim understanding. No, not even the “best” vague understanding you have. That was my mistake—thinking that saying “best guess” excused anything. There was only the standard I had failed to meet.

Of course Eliezer1997 didn’t want to slow down on the way to the Singularity, with so many lives at stake, and the very survival of Earth-originating intelligent life, if we got to the era of nanoweapons before the era of superintelligence—

Nature doesn’t care about such righteous reasons. There’s just the astronomically high standard needed for success. Either you match it, or you fail. That’s all.

The apocalypse does not need to be fair to you.
The apocalypse does not need to offer you a chance of success
In exchange for what you’ve already brought to the table.
The apocalypse’s difficulty is not matched to your skills.
The apocalypse’s price is not matched to your resources.
If the apocalypse asks you for something unreasonable
And you try to bargain it down a little
(Because everyone has to compromise now and then)
The apocalypse will not try to negotiate back up.

And, oh yes, it gets worse.

How did Eliezer1997 deal with the obvious argument that you couldn’t possibly derive an “ought” from pure logic, because “ought” statements could only be derived from other “ought” statements?

Well (observed Eliezer1997), this problem has the same structure as the argument that a cause only proceeds from another cause, or that a real thing can only come of another real thing, whereby you can prove that nothing exists.

Thus (he said) there are three “hard problems”: The hard problem of conscious experience, in which we see that qualia cannot arise from computable processes; the hard problem of existence, in which we ask how any existence enters apparently from nothingness; and the hard problem of morality, which is to get to an “ought”.

These problems are probably linked. For example, the qualia of pleasure are one of the best candidates for something intrinsically desirable. We might not be able to understand the hard problem of morality, therefore, without unraveling the hard problem of consciousness. It’s evident that these problems are too hard for humans—otherwise someone would have solved them over the last 2500 years since philosophy was invented.

It’s not as if they could have complicated solutions—they’re too simple for that. The problem must just be outside human concept-space. Since we can see that consciousness can’t arise on any computable process, it must involve new physics—physics that our brain uses, but can’t understand. That’s why we need superintelligence in order to solve this problem. Probably it has to do with quantum mechanics, maybe with a dose of tiny closed timelike curves from out of General Relativity; temporal paradoxes might have some of the same irreducibility properties that consciousness seems to demand...

Et cetera, ad nauseam. You may begin to perceive, in the arc of my Overcoming Bias posts, the letter I wish I could have written to myself.

Of this I learn the lesson: You cannot manipulate confusion. You cannot make clever plans to work around the holes in your understanding. You can’t even make “best guesses” about things which fundamentally confuse you, and relate them to other confusing things. Well, you can, but you won’t get it right, until your confusion dissolves. Confusion exists in the mind, not in the reality, and trying to treat it like something you can pick up and move around, will only result in unintentional comedy.

Similarly, you cannot come up with clever reasons why the gaps in your model don’t matter. You cannot draw a border around the mystery, put on neat handles that let you use the Mysterious Thing without really understanding it—like my attempt to make the possibility that life is meaningless cancel out of an expected utility formula. You can’t pick up the gap and manipulate it.

If the blank spot on your map conceals a land mine, then putting your weight down on that spot will be fatal, no matter how good your excuse for not knowing. Any black box could contain a trap, and there’s no way to know except opening up the black box and looking inside. If you come up with some righteous justification for why you need to rush on ahead with the best understanding you have—the trap goes off.

It’s only when you know the rules,
That you realize why you needed to learn;
What would have happened otherwise,
How much you needed to know.

Only knowledge can foretell the cost of ignorance. The ancient alchemists had no logical way of knowing the exact reasons why it was hard for them to turn lead into gold. So they poisoned themselves and died. Nature doesn’t care.

But there did come a time when realization began to dawn on me. To be continued.