The Uses of Fun (Theory)

“But is there anyone who actually wants to live in a Wellsian Utopia? On the contrary, not to live in a world like that, not to wake up in a hygienic garden suburb infested by naked schoolmarms, has actually become a conscious political motive. A book like Brave New World is an expression of the actual fear that modern man feels of the rationalised hedonistic society which it is within his power to create.”
—George Orwell, Why Socialists Don’t Believe in Fun

There are three reasons I’m talking about Fun Theory, some more important than others:

  1. If every picture ever drawn of the Future looks like a terrible place to actually live, it might tend to drain off the motivation to create the future. It takes hope to sign up for cryonics.

  2. People who leave their religions, but don’t familiarize themselves with the deep, foundational, fully general arguments against theism, are at risk of backsliding. Fun Theory lets you look at our present world, and see that it is not optimized even for considerations like personal responsibility or self-reliance. It is the fully general reply to theodicy.

  3. Going into the details of Fun Theory helps you see that eudaimonia is actually complicated—that there are a lot of properties necessary for a mind to lead a worthwhile existence. Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.

To amplify on these points in order:

(1) You’ve got folks like Leon Kass and the other members of Bush’s “President’s Council on Bioethics” running around talking about what a terrible, terrible thing it would be if people lived longer than threescore and ten. While some philosophers have pointed out the flaws in their arguments, it’s one thing to point out a flaw and another to provide a counterexample. “Millions long for immortality who do not know what to do with themselves on a rainy Sunday afternoon,” said Susan Ertz, and that argument will sound plausible for as long as you can’t imagine what to do on a rainy Sunday afternoon, and it seems unlikely that anyone could imagine it.

It’s not exactly the fault of Hans Moravec that his world, in which humans are kept as pets by superintelligences, doesn’t sound quite Utopian. Utopias are just really hard to construct, for reasons I’ll talk about in more detail later—but this observation has already been made by many, including George Orwell.

Building the Future is part of the ethos of secular humanism, our common project. If you have nothing to look forward to—if there’s no image of the Future that can inspire real enthusiasm—then you won’t be able to scrape up enthusiasm for that common project. And if the project is, in fact, a worthwhile one, the expected utility of the future will suffer accordingly from that nonparticipation. So that’s one side of the coin, just as the other side is living so exclusively in a fantasy of the Future that you can’t bring yourself to go on in the Present.

I recommend thinking vaguely of the Future’s hopes, thinking specifically of the Past’s horrors, and spending most of your time in the Present. This strategy has certain epistemic virtues beyond its use in cheering yourself up.

But it helps to have legitimate reason to vaguely hope—to minimize the leaps of abstract optimism involved in thinking that, yes, you can live and obtain happiness in the Future.

(2) Rationality is our goal, and atheism is just a side effect—the judgment that happens to be produced. But atheism is an important side effect. John C. Wright, who wrote the heavily transhumanist The Golden Age, had some kind of temporal lobe epileptic fit and became a Christian. There’s a once-helpful soul, now lost to us.

But it is possible to do better, even if your brain malfunctions on you. I know a transhumanist who has strong religious visions, which she once attributed to future minds reaching back in time and talking to her… but then she reasoned it out, asking why future superminds would grant only her the solace of conversation, and why they could offer vaguely reassuring arguments but not tell her winning lottery numbers or the 900th digit of pi. So now she still has strong religious experiences, but she is not religious. That’s the difference between weak rationality and strong rationality, and it has to do with the depth and generality of the epistemic rules that you know and apply.

Fun Theory is part of the fully general reply to religion; in particular, it is the fully general reply to theodicy. If you can’t say how God could have better created the world without sliding into an antiseptic Wellsian Utopia, you can’t carry Epicurus’s argument. If, on the other hand, you have some idea of how you could build a world that was not only more pleasant but also a better medium for self-reliance, then you can see that permanently losing both your legs in a car accident when someone else crashes into you, doesn’t seem very eudaimonic.

If we can imagine what the world might look like if it had been designed by anything remotely like a benevolently inclined superagent, we can look at the world around us, and see that this isn’t it. This doesn’t require that we correctly forecast the full optimization of a superagent—just that we can envision strict improvements on the present world, even if they prove not to be maximal.

(3) There’s a severe problem in which people, due to anthropomorphic optimism and the lack of specific reflective knowledge about their invisible background framework and many other biases which I have discussed, think of a “nonhuman future” and just subtract off a few aspects of humanity that are salient, like enjoying the taste of peanut butter or something—while still envisioning a future filled with minds that have aesthetic sensibilities, experience happiness on fulfilling a task, get bored with doing the same thing repeatedly, etcetera. These things seem universal, rather than specifically human—to a human, that is. They don’t involve having ten fingers or two eyes, so they must be universal, right?

And if you’re still in this frame of mind—where “real values” are the ones that persuade every possible mind, and the rest is just some extra specifically human stuff—then Friendly AI will seem unnecessary to you, because, in its absence, you expect the universe to be valuable but not human.

It turns out, though, that once you start talking about what specifically is and isn’t valuable, even if you try to keep yourself sounding as “non-human” as possible—then you still end up with a big complicated computation that is only instantiated physically in human brains and nowhere else in the universe. Complex challenges? Novelty? Individualism? Self-awareness? Experienced happiness? A paperclip maximizer cares not about these things.

It is a long project to crack people’s brains loose of thinking that things will turn out regardless—that they can subtract off a few specifically human-seeming things, and then end up with plenty of other things they care about that are universal and will appeal to arbitrarily constructed AIs. And of this I have said a very great deal already. But it does not seem to be enough. So Fun Theory is one more step—taking the curtains off some of the invisible background of our values, and revealing some of the complex criteria that go into a life worth living.

Part of The Fun Theory Sequence

Next post: “Higher Purpose”

Previous post: “Seduced by Imagination”