The Up-Goer Five Game: Explaining hard ideas with simple words

xkcd’s Up-Goer Five comic gave technical specifications for the Saturn V rocket using only the 1,000 most common words in the English language.

This seemed to me and Briénne to be a really fun exercise, both for tabooing one’s words and for communicating difficult concepts to laypeople. So why not make a game out of it? Pick any tough, important, or interesting argument or idea, and use this text editor to try to describe what you have in mind with extremely common words only.

This is challenging, so if you almost succeed and want to share your results, you can mark words where you had to cheat in *italics*. Bonus points if your explanation is actually useful for gaining a deeper understanding of the idea, or for teaching it, in the spirit of Gödel’s Second Incompleteness Theorem Explained in Words of One Syllable.
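
If you want to check an attempt mechanically, here is a minimal sketch of such a checker in Python. The word-list file (`top1000.txt`, one word per line) and the function names are my own assumptions for illustration; the actual Up-Goer Five text editor works the same way in spirit, but this is not its code.

```python
import re

# A minimal sketch of an Up-Goer Five style checker. It assumes a file
# "top1000.txt" with one allowed word per line; the file name and format
# are hypothetical, not part of the original editor.

def load_allowed(path="top1000.txt"):
    """Read the allowed-words file into a lowercase set."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def find_cheats(text, allowed):
    """Return the words in `text` that are not on the allowed list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in allowed})

if __name__ == "__main__":
    allowed = load_allowed()
    attempt = "Computers are faster than humans at doing hard things."
    # These are the words you would mark in *italics* when sharing.
    # Note: this treats each word form ("run" vs. "running") as a
    # separate word; a real checker would be more forgiving.
    print(find_cheats(attempt, allowed))
```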

As an example, here’s my attempt to capture the five theses using only top-thousand words:

  • Intelligence explosion: If we make a computer that is good at doing hard things in lots of different situations without using much stuff up, it may be able to help us build better computers. Since computers are faster than humans, pretty soon the computer would probably be doing most of the work of making new and better computers. We would have a hard time controlling or understanding what was happening as the new computers got faster and grew more and more parts. By the time these computers ran out of ways to quickly and easily make better computers, the best computers would have already become much much better than humans at controlling what happens.

  • Orthogonality: Different computers, and different minds as a whole, can want very different things. They can want things that are very good for humans, or very bad, or anything in between. We can be pretty sure that strong computers won’t think like humans, and most possible computers won’t try to change the world in the way a human would.

  • Convergent instrumental goals: Although most possible minds want different things, they need a lot of the same things to get what they want. A computer and a human might want things that in the long run have nothing to do with each other, but have to fight for the same share of stuff first to get those different things.

  • Complexity of value: It would take a huge number of parts, all put together in just the right way, to build a computer that does all the things humans want it to (and none of the things humans don’t want it to).

  • Fragility of value: If we get a few of those parts a little bit wrong, the computer will probably make only bad things happen from then on. We need almost everything we want to happen, or we won’t have any fun.

If you make a really strong computer and it is not very nice, you will not go to space today.

Other ideas to start with: agent, akrasia, Bayes’ theorem, Bayesianism, CFAR, cognitive bias, consequentialism, deontology, effective altruism, Everett-style (‘Many Worlds’) interpretations of quantum mechanics, entropy, evolution, the Great Reductionist Thesis, halting problem, humanism, law of nature, LessWrong, logic, mathematics, the measurement problem, MIRI, Newcomb’s problem, Newton’s laws of motion, optimization, Pascal’s wager, philosophy, preference, proof, rationality, religion, science, Shannon information, signaling, the simulation argument, singularity, sociopathy, the supernatural, superposition, time, timeless decision theory, transfinite numbers, Turing machine, utilitarianism, validity and soundness, virtue ethics, VNM-utility