The Fun Theory Sequence

(A shorter gloss of Fun Theory is “31 Laws of Fun”, which summarizes the advice of Fun Theory to would-be Eutopian authors and futurists.)

Fun Theory is the field of knowledge that deals in questions such as “How much fun is there in the universe?”, “Will we ever run out of fun?”, “Are we having fun yet?” and “Could we be having more fun?”

Many critics (including George Orwell) have commented on the inability of authors to imagine Utopias where anyone would actually want to live. If no one can imagine a Future where anyone would want to live, that may drain off motivation to work on the project. The prospect of endless boredom is routinely fielded by conservatives as a knockdown argument against research on lifespan extension, against cryonics, against all transhumanism, and occasionally against the entire Enlightenment ideal of a better future.

Fun Theory is also the fully general reply to religious theodicy (attempts to justify why God permits evil). Our present world has flaws even from the standpoint of such eudaimonic considerations as freedom, personal responsibility, and self-reliance. Fun Theory tries to describe the dimensions along which a benevolently designed world can and should be optimized, and our present world is clearly not the result of such optimization. Fun Theory also highlights the flaws of any particular religion’s perfect afterlife—you wouldn’t want to go to their Heaven.

Finally, going into the details of Fun Theory helps you see that eudaimonia is complicated—that there are many properties which contribute to a life worth living. Which helps you appreciate just how worthless a galaxy would end up looking (with very high probability) if the galaxy was optimized by something with a utility function rolled up at random. This is part of the Complexity of Value Thesis and supplies motivation to create AIs with precisely chosen goal systems (Friendly AI).

Fun Theory is built on top of the naturalistic metaethics summarized in Joy in the Merely Good; as such, its arguments ground in “On reflection, don’t you think this is what you would actually want for yourself and others?”

Posts in the Fun Theory sequence (reorganized by topic, not necessarily in the original chronological order):

  • Prolegomena to a Theory of Fun: Fun Theory is an attempt to actually answer questions about eternal boredom that are more often posed and left hanging. Attempts to visualize Utopia are often defeated by standard biases, such as the attempt to imagine a single moment of good news (“You don’t have to work anymore!”) rather than a typical moment of daily life ten years later. People also believe they should enjoy various activities that they actually don’t. But since human values have no supernatural source, it is quite reasonable for us to try to understand what we want. There is no external authority telling us that the future of humanity should not be fun.

  • High Challenge: Life should not always be made easier for the same reason that video games should not always be made easier. Think in terms of eliminating low-quality work to make way for high-quality work, rather than eliminating all challenge. One needs games that are fun to play and not just fun to win. Life’s utility function is over 4D trajectories, not just 3D outcomes. Values can legitimately be over the subjective experience, the objective result, and the challenging process by which it is achieved—the traveller, the destination and the journey.

  • Complex Novelty: Are we likely to run out of new challenges, and be reduced to playing the same video game over and over? How large is Fun Space? This depends on how fast you learn; the faster you generalize, the more challenges you see as similar to each other. Learning is fun, but uses up fun; you can’t have the same stroke of genius twice. But the more intelligent you are, the more potential insights you can understand; human Fun Space is larger than chimpanzee Fun Space, and not just by a linear factor of our brain size. In a well-lived life, you may need to increase in intelligence fast enough to integrate your accumulating experiences. If so, the rate at which new Fun becomes available to intelligence is likely to overwhelmingly swamp the amount of time you could spend at that fixed level of intelligence. The Busy Beaver sequence is an infinite series of deep insights not reducible to each other or to any more general insight.

  • Continuous Improvement: Humans seem to be on a hedonic treadmill; over time, we adjust to any improvements in our environment—after a month, the new sports car no longer seems quite as wonderful. This aspect of our evolved psychology is not surprising: it is a rare organism in a rare environment whose optimal reproductive strategy is to rest with a smile on its face, feeling happy with what it already has. To entirely delete the hedonic treadmill seems perilously close to tampering with Boredom itself. Is there enough fun in the universe for a transhuman to jog off the treadmill—improve their life continuously, leaping to ever-higher hedonic levels before adjusting to the previous one? Can ever-higher levels of pleasure be created by the simple increase of ever-larger floating-point numbers in a digital pleasure center, or would that fail to have the full subjective quality of happiness? If we continue to bind our pleasures to novel challenges, can we find higher levels of pleasure fast enough, without cheating? The rate at which value can increase as more bits are added, and the rate at which value must increase for eudaimonia, together determine the lifespan of a mind. If minds must use exponentially more resources over time in order to lead a eudaimonic existence, their subjective lifespan is measured in mere millennia even if they can draw on galaxy-sized resources.

  • Sensual Experience: Much of the anomie and disconnect in modern society can be attributed to our spending all day on tasks (like office work) that we didn’t evolve to perform (unlike hunting and gathering on the savanna). Thus, many of the tasks we perform all day do not engage our senses—even the most realistic modern video game is not the same level of sensual experience as outrunning a real tiger on the real savanna. Even the best modern video game is low-bandwidth fun—a low-bandwidth connection to a relatively simple challenge, which doesn’t fill our brains well as a result. But future entities could have different senses and higher-bandwidth connections to more complicated challenges, even if those challenges didn’t exist on the savanna.

  • Living By Your Own Strength: Our hunter-gatherer ancestors strung their own bows, wove their own baskets and whittled their own flutes. Part of our alienation from our design environment is the number of tools we use that we don’t understand and couldn’t make for ourselves. It’s much less fun to read something in a book than to discover it for yourself. Specialization is critical to our current civilization. But the future does not have to be a continuation of this trend in which we rely more and more on things outside ourselves which become less and less comprehensible. With a surplus of power, you could begin to rethink the life experience as a road to internalizing new strengths, not just staying alive efficiently through extreme specialization.

  • Free to Optimize: Stare decisis is the legal principle which binds courts to follow precedent. The rationale is not that past courts were wiser, but jurisprudence constante: The legal system must be predictable so that people can implement contracts and behaviors knowing their implications. The purpose of law is not to make the world perfect, but to provide a predictable environment in which people can optimize their own futures. If an extremely powerful entity is choosing good futures on your behalf, that may leave little slack for you to navigate through your own strength. Describing how an AI can avoid stomping your self-determination is a structurally complicated problem. A simple (possibly not best) solution would be the gift of a world that works by improved rules, stable enough that the inhabitants could understand them and optimize their own futures together, but otherwise hands-off. Modern legal systems fail along this dimension; no one can possibly know all the laws, let alone obey them.

  • Harmful Options: Offering people more choices that differ along many dimensions may diminish their satisfaction with their final choice. Losses are more painful than the corresponding gains are pleasurable, so people think of the dimensions along which their final choice was inferior, and of all the other opportunities passed up. If you can only choose one dessert, you’re likely to be happier choosing from a menu of two than from a menu of fourteen. Refusing tempting choices consumes mental energy and decreases performance on other cognitive tasks. A video game that contained an always-visible easier route through would probably be less fun to play, even if that easier route were deliberately foregone. You can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken. And what if a worse option is taken due to a predictable mistake? There are many ways to harm people by offering them more choices.

  • Devil’s Offers: It is dangerous to live in an environment in which a single failure of resolve, throughout your entire life, can result in a permanent addiction or in a poor edit of your own brain—for example, a civilization which is constantly offering people tempting ways to shoot off their own feet, such as a cheap escape into eternal virtual reality, or customized drugs. Resisting such offers requires a constant stern will that may not be much fun. And it’s questionable whether a superintelligence that descends from above to offer people huge dangerous temptations that they wouldn’t encounter on their own is helping.

  • Nonperson Predicates, Nonsentient Optimizers, Can’t Unbirth a Child: Discusses some of the problems of, and justification for, creating AIs that are knowably not conscious / sentient / people / citizens / subjective experiencers. We don’t want the AI’s models of people to be people—we don’t want conscious minds trapped helplessly inside it. So we need to know how to tell that something is definitely not a person; and in this case, maybe we would like the AI itself to not be a person, which would simplify a lot of ethical issues if we could pull it off. Creating a new intelligent species is not lightly to be undertaken from a purely ethical perspective; if you create a new kind of person, you have to make sure it leads a life worth living.

  • Amputation of Destiny: C. S. Lewis’s Narnia has a problem, and that problem is the super-lion Aslan—who demotes the four human children from the status of main characters, to mere hangers-on while Aslan does all the work. Iain Banks’s Culture novels have a similar problem; the humans are mere hangers-on of the superintelligent Minds. We already have strong ethical reasons to prefer to create nonsentient AIs rather than sentient AIs, at least at first. But we may also prefer, in just a fun-theoretic sense, that we not be overshadowed by hugely more powerful entities occupying a level playing field with us. Entities with human emotional makeups should not be competing on a level playing field with superintelligences—either keep the superintelligences off the playing field, or design the smaller (human-level) minds with a different emotional makeup that doesn’t mind being overshadowed.

  • Dunbar’s Function: Robin Dunbar’s original calculation showed that the maximum human group size was around 150. But a typical size for a hunter-gatherer band would be 30-50, cohesive online groups peak at 50-60, and small task forces may peak in internal cohesiveness around 7. Our attempt to live in a world of six billion people has many emotional costs: We aren’t likely to know our President or Prime Minister, or to have any significant influence over our country’s politics, although we go on behaving as if we did. We are constantly bombarded with news about improbably pretty and wealthy individuals. We aren’t likely to find a significant profession where we can be the best in our field. But if intelligence keeps increasing, the number of personal relationships we can track will also increase, along with the natural degree of specialization. Eventually there might be a single community of sentients that really was a single community.

  • In Praise of Boredom: “Boredom” is an immensely subtle and important aspect of human values, nowhere near as straightforward as it sounds to a human. We don’t want to get bored with breathing or with thinking. We do want to get bored with playing the same level of the same video game over and over. We don’t want changing the shade of the pixels in the game to make it stop counting as “the same game”. We want a steady stream of novelty, rather than spending most of our time playing the best video game level so far discovered (over and over) and occasionally trying out a different video game level as a new candidate for “best”. These considerations would not arise in most utility functions in expected utility maximizers.

  • Sympathetic Minds: Mirror neurons are neurons that fire both when performing an action oneself, and when watching someone else perform the same action—for example, a neuron that fires when you raise your hand or watch someone else raise theirs. We predictively model other minds by putting ourselves in their shoes, which is empathy. But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel. Like “boredom”, the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI. Most such agents would regard any agents in their environment as a special case of complex systems to be modeled or optimized; they would not feel what those agents feel.

  • Interpersonal Entanglement: Our sympathy with other minds makes our interpersonal relationships one of the most complex aspects of human existence. Romance, in particular, is more complicated than being nice to friends and kin, negotiating with allies, or outsmarting enemies—it contains aspects of all three. Replacing human romance with anything simpler or easier would decrease the peak complexity of the human species—a major step in the wrong direction, it seems to me. This is my problem with proposals to give people perfect, nonsentient sexual/romantic partners, which I usually refer to as “catgirls” (“catboys”). The human species does have a statistical sex problem: evolution has not optimized the average man to make the average woman happy, or vice versa. But there are less sad ways to solve this problem than both genders giving up on each other and retreating to catgirls/catboys.

  • Failed Utopia #4-2: A fictional short story illustrating some of the ideas in Interpersonal Entanglement above. (Many commenters seemed to like this story, and some said that the ideas were easier to understand in this form.)

  • Growing Up is Hard: Each piece of the human brain is optimized on the assumption that all the other pieces are working the same way they did in the ancestral environment. Simple neurotransmitter imbalances can result in psychosis, and some aspects of Williams Syndrome are probably due to having a frontal cortex that is too large relative to the rest of the brain. Evolution creates limited robustness, but often stepping outside the ancestral parameter box just breaks things. Even if the first change works, the second and third changes are less likely to work as the total parameters get less ancestral and the brain’s tolerance is used up. A cleanly designed AI might improve itself to the point where it was smart enough to unravel and augment the human brain. Or uploads might be able to make themselves smart enough to solve the increasingly difficult problem of not going slowly, subtly insane. Neither path is easy. There seems to be an irreducible residue of danger and difficulty associated with an adult version of humankind ever coming into being. Being a transhumanist means wanting certain things; it doesn’t mean you think those things are easy.

  • Changing Emotions: Creating new emotions seems like a desirable aspect of many parts of Fun Theory, but this is not to be trivially postulated. It’s the sort of thing best done with superintelligent help, and slowly and conservatively even then. We can illustrate these difficulties by trying to translate the short English phrase “change sex” into a cognitive transformation of extraordinary complexity and many hidden subproblems.

  • Emotional Involvement: Since the events in video games have no actual long-term consequences, playing a video game is not likely to be nearly as emotionally involving as much less dramatic events in real life. The supposed Utopia of playing lots of cool video games forever is life as a series of disconnected episodes with no lasting consequences. Our current emotions are bound to activities that were subgoals of reproduction in the ancestral environment—but we now pursue these activities as independent goals regardless of whether they lead to reproduction. (Sex with birth control is the classic example.) A transhuman existence would need new emotions suited to the important short-term and long-term events of that existence.

  • Serious Stories: Stories and lives are optimized according to rather different criteria. Advice on how to write fiction will tell you that “stories are about people’s pain” and “every scene must end in disaster”. I once assumed that it was not possible to write any story about a successful Singularity because the inhabitants would not be in any pain; but something about the final conclusion that the post-Singularity world would contain no stories worth telling seemed alarming. Stories in which nothing ever goes wrong are painful to read; would a life of endless success have the same painful quality? If so, should we simply eliminate that revulsion via neural rewiring? Pleasure probably does retain its meaning in the absence of pain to contrast it; they are different neural systems. The present world has an imbalance between pain and pleasure; it is much easier to produce severe pain than correspondingly intense pleasure. One path would be to address the imbalance and create a world with more pleasures, and free of the more grindingly destructive and pointless sorts of pain. Another approach would be to eliminate pain entirely. I feel like I prefer the former approach, but I don’t know if it can last in the long run.

  • Eutopia is Scary: If a citizen of the Past were dropped into the Present world, they would be pleasantly surprised along at least some dimensions; they would also be horrified, disgusted, and frightened. This is not because our world has gone wrong, but because it has gone right. A true Future gone right would, realistically, be shocking to us along at least some dimensions. This may help explain why most literary Utopias fail; as George Orwell observed, “they are chiefly concerned with avoiding fuss”. Heavens are meant to sound like good news; political utopias are meant to show how neatly their underlying ideas work. Utopia is reassuring, unsurprising, and dull. Eutopia would be scary. (Of course the vast majority of scary things are not Eutopian, just entropic.) Try to imagine a genuinely better world in which you would be out of place, not a world that would make you smugly satisfied at how well all your current ideas had worked. This proved to be a very important exercise when I tried it; it made me realize that all my old proposals had been optimized to sound safe and reassuring.

  • Building Weirdtopia: Utopia and Dystopia both confirm the moral sensibilities you started with; whether the world is a libertarian utopia of government non-interference, or a hellish dystopia of government intrusion and regulation, either way you get to say “Guess I was right all along.” To break out of this mold, write down the Utopia, and the Dystopia, and then try to write down the Weirdtopia—an arguably-better world that zogs instead of zigging or zagging. (Judging from the comments, this exercise seems to have mostly failed.)

  • Justified Expectation of Pleasant Surprises: A pleasant surprise probably has a greater hedonic impact than being told about the same positive event long in advance—hearing about the positive event is good news in the moment of first hearing, but you don’t have the gift actually in hand. Then you have to wait, perhaps for a long time, possibly comparing the expected pleasure of the future to the lesser pleasure of the present. This argues that if you have a choice between a world in which the same pleasant events occur, but in the first world you are told about them long in advance, and in the second world they are kept secret until they occur, you would prefer to live in the second world. The importance of hope is widely appreciated—people who do not expect their lives to improve in the future are less likely to be happy in the present—but the importance of vague hope may be understated.

  • Seduced by Imagination: Vagueness usually has a poor name in rationality, but the Future is something about which, in fact, we do not possess strong reliable specific information. Vague (but justified!) hopes may also be hedonically better. But a more important caution for today’s world is that highly specific pleasant scenarios can exert a dangerous power over human minds—suck out our emotional energy, make us forget what we don’t know, and cause our mere actual lives to pale by comparison. (This post is not about Fun Theory proper, but it contains an important warning about how not to use Fun Theory.)

  • The Uses of Fun (Theory): Fun Theory is important for replying to critics of human progress; for inspiring people to keep working on human progress; for refuting religious arguments that the world could possibly have been benevolently designed; for showing that religious Heavens show the signature of the same human biases that torpedo other attempts at Utopia; and for appreciating the great complexity of our values and of a life worth living, which requires a correspondingly strong effort of AI design to create AIs that can play good roles in a good future.

  • Higher Purpose: Having a Purpose in Life consistently shows up as something that increases stated well-being. Of course, the problem with trying to pick out “a Purpose in Life” in order to make yourself happier is that this doesn’t take you outside yourself; it’s still all about you. To find purpose, you need to turn your eyes outward to look at the world and find things there that you care about—rather than obsessing about the wonderful spiritual benefits you’re getting from helping others. In today’s world, most of the highest-priority legitimate Causes consist of large groups of people in extreme jeopardy: Aging threatens the old, starvation threatens the poor, extinction risks threaten humanity as a whole. If the future goes right, many and perhaps all such problems will be solved—depleting the stream of victims to be helped. Will the future therefore consist of self-obsessed individuals, with nothing to take them outside themselves? I suggest, though, that even if there were no large groups of people in extreme jeopardy, we would still, looking around, find things outside ourselves that we cared about—friends, family; truth, freedom… Nonetheless, if the Future goes sufficiently well, there will come a time when you could search the whole of civilization, and never find a single person so much in need of help as dozens you now pass on the street. If you do want to save someone from death, or help a great many people, then act now; your opportunity may not last, one way or another.
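The Busy Beaver claim in Complex Novelty above can be made concrete. The Busy Beaver numbers count the most ones a halting n-state Turing machine can write; the sequence grows faster than any computable function, so each new champion machine embodies an insight not derivable from the earlier ones. A minimal sketch, using the standard published 2-state, 2-symbol champion (which writes 4 ones and halts in 6 steps); the simulator itself is generic:

```python
def run_turing_machine(rules, start_state="A", max_steps=10_000):
    """Run a Turing machine on an initially blank (all-zero) tape.

    rules maps (state, symbol) -> (symbol_to_write, head_move, next_state).
    Returns (number of ones on the tape, steps taken) once it halts.
    """
    tape, pos, state, steps = {}, 0, start_state, 0
    while state != "HALT" and steps < max_steps:
        symbol = tape.get(pos, 0)          # blank cells read as 0
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
        steps += 1
    return sum(tape.values()), steps

# The 2-state Busy Beaver champion: no 2-state machine that halts
# writes more ones than this one.
BB2 = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

ones, steps = run_turing_machine(BB2)
print(ones, steps)  # 4 ones, 6 steps
```

Already at 5 states the champion takes tens of millions of steps, and beyond that the values outrun anything provable in ordinary mathematics; this is the sense in which the sequence is an inexhaustible well of irreducible insights.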
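The closing claim of Continuous Improvement above, that exponentially growing resource needs cap subjective lifespan in the millennia even with a galaxy to spend, is a matter of simple arithmetic. A back-of-envelope sketch, under loudly hypothetical assumptions: a mind starts on roughly planet-scale hardware (~1e27 atoms), must double its resources each subjective century to stay eudaimonic, and can draw on ~1e67 atoms, a common order-of-magnitude estimate for a galaxy:

```python
import math

# All three constants are illustrative assumptions, not claims
# from the original text.
GALAXY_ATOMS = 1e67       # assumed total resource budget
START_ATOMS = 1e27        # assumed starting hardware
YEARS_PER_DOUBLING = 100  # assumed eudaimonic growth requirement

# How many times can the mind double before exhausting the galaxy?
doublings = math.log2(GALAXY_ATOMS / START_ATOMS)
lifespan_years = doublings * YEARS_PER_DOUBLING
print(f"{doublings:.0f} doublings -> ~{lifespan_years:,.0f} subjective years")
```

Forty orders of magnitude sounds inexhaustible, but it is only about 133 doublings, so under these assumptions the eudaimonic lifespan comes out near thirteen thousand years: millennia, as the text says, not eons.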