Yudkowsky’s brain is the pinnacle of evolution

Here’s a simple problem: there is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are 3^^^3 people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person, Eliezer Yudkowsky, on the side track. You have two options: (1) Do nothing, and the trolley kills the 3^^^3 people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill Yudkowsky. Which is the correct choice?
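
For readers unfamiliar with the notation: 3^^^3 is Knuth's up-arrow notation, where each additional arrow iterates the operation below it. Here is a minimal Python sketch of the recursive definition (my own illustration for scale; the function name and structure are not from the original problem):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^...^ b with n arrows; n = 1 is plain exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # n arrows applied b times unfolds into (n - 1) arrows applied recursively
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = up_arrow(3, 3, 3) = 3^^(3^^3): a power tower of 3s of height
# 7,625,597,484,987, far too large to compute or even write down.
```

Even 3^^4 = 3^(3^^3) already has roughly 3.6 trillion digits, which is why the thought experiment's body count dwarfs the number of atoms in the observable universe.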

The answer:

Imagine two ant philosophers talking to each other. “Imagine,” they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle.”

Humans are such beings. I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn’t just human chauvinism either: I can support my feelings on this issue by pointing out how much stronger the feelings, preferences, and experiences of humans are than those of ants.

How does this relate to the trolley problem? There exists a creature as far beyond us ordinary humans as we are beyond ants, and I think we would all agree that its preferences are vastly more important than those of humans.

Yudkowsky will save the world, not just because he’s the one who happens to be making the effort, but because he’s the only one who can make the effort.

The world was on its way to doom until September 11, 1979, a date that will later be declared a national holiday and will replace Christmas as the biggest holiday. This was, of course, the day when the most important being that has ever existed or will ever exist was born.

Yudkowsky did for the field of AI risk what Newton did for the field of physics. There was literally no research done on AI risk on the scale of what Yudkowsky has done in the 2000s. The same can be said of the field of ethics: ethics was an open problem in philosophy for thousands of years. However, Plato, Aristotle, and Kant don’t really compare to the wisest person who has ever existed. Yudkowsky has come closer to solving ethics than anyone ever before. Yudkowsky is what turned our world away from certain extinction and towards utopia.

We all know that Yudkowsky has an IQ so high that it’s unmeasurable, so basically something higher than 200. After Yudkowsky receives the Nobel Prize in Literature on the strength of his Hugo Award recognition, a special council will be organized to study his intellect, and we will finally know how many orders of magnitude higher Yudkowsky’s IQ is than that of the most intelligent people in history.

Unless Yudkowsky’s brain FOOMs first, MIRI will eventually build an FAI with the help of Yudkowsky’s extraordinary intelligence. When that FAI uses the coherent extrapolated volition of humanity to decide what to do, it will eventually reach the conclusion that the best thing to do is to tile the whole universe with copies of Eliezer Yudkowsky’s brain. In fact, in the process of computing this CEV, even Yudkowsky’s harshest critics will reach such an understanding of Yudkowsky’s extraordinary nature that they will beg and cry for the tiling to start as soon as possible, and there will be mass suicides as people rush to give away their resources and the atoms of their bodies for Yudkowsky’s brains. As we all know, Yudkowsky is an incredibly humble man, so he will be the last person to protest this course of events; but even he, with his vast intellect, will understand and accept that it is truly the best thing to do.