Prolegomena to a Theory of Fun

Followup to: Joy in the Merely Good

Raise the topic of cryonics, uploading, or just medically extended lifespan/healthspan, and some bioconservative neo-Luddite is bound to ask, in portentous tones:

“But what will people do all day?”

They don’t try to actually answer the question. That is not a bioethicist’s role, in the scheme of things. They’re just there to collect credit for the Deep Wisdom of asking the question. It’s enough to imply that the question is unanswerable, and therefore, we should all drop dead.

That doesn’t mean it’s a bad question.

It’s not an easy question to answer, either. The primary experimental result in hedonic psychology—the study of happiness—is that people don’t know what makes them happy.

And there are many exciting results in this new field, which go a long way toward explaining the emptiness of classical Utopias. But it’s worth remembering that human hedonic psychology is not enough for us to consider, if we’re asking whether a million-year lifespan could be worth living.

Fun Theory, then, is the field of knowledge that would deal in questions like:

  • “How much fun is there in the universe?”

  • “Will we ever run out of fun?”

  • “Are we having fun yet?”

  • “Could we be having more fun?”

One major set of experimental results in hedonic psychology has to do with overestimating the impact of life events on happiness. Six months after the event, lottery winners aren’t as happy as they expected to be, and quadriplegics aren’t as sad. A parent who loses a child isn’t as sad as they think they’ll be, a few years later. If you look at one moment snapshotted out of their lives a few years later, that moment isn’t likely to be about the lost child. Maybe they’re playing with one of their surviving children on a swing. Maybe they’re just listening to a nice song on the radio.

When people are asked to imagine how happy or sad an event will make them, they anchor on the moment of first receiving the news, rather than realistically imagining the process of daily life years later.

Consider what the Christians made of their Heaven, meant to be literally eternal. Endless rest, the glorious presence of God, and occasionally—in the more clueless sort of sermon—golden streets and diamond buildings. Is this eudaimonia? It doesn’t even seem very hedonic.

As someone who said his share of prayers back in his Orthodox Jewish childhood upbringing, I can personally testify that praising God is an enormously boring activity, even if you’re still young enough to truly believe in God. The part about praising God is there as an applause light that no one is allowed to contradict: it’s something theists believe they should enjoy, even though, if you ran them through an fMRI machine, you probably wouldn’t find their pleasure centers lighting up much.

Ideology is one major wellspring of flawed Utopias, containing things that the imaginer believes should be enjoyed, rather than things that would actually be enjoyable.

And eternal rest? What could possibly be more boring than eternal rest?

But to an exhausted, poverty-stricken medieval peasant, the Christian Heaven sounds like good news in the moment of being first informed: You can lay down the plow and rest! Forever! Never to work again!

It’d get boring after… what, a week? A day? An hour?

Heaven is not configured as a nice place to live. It is rather memetically optimized to be a nice place for an exhausted peasant to imagine. It’s not like some Christians actually got a chance to live in various Heavens, and voted on how well they liked it after a year, and then they kept the best one. The Paradise that survived was the one that was retold, not lived.

Timothy Ferriss observed, “Living like a millionaire requires doing interesting things and not just owning enviable things.” Golden streets and diamond walls would fade swiftly into the background, once obtained—but so long as you don’t actually have gold, it stays desirable.

There are two lessons required to get past such failures; and these lessons are in some sense opposite to one another.

The first lesson is that humans are terrible judges of what will actually make them happy, in the real world and the living moments. Daniel Gilbert’s Stumbling on Happiness is the most famous popular introduction to the research.

We need to be ready to correct for such biases—the world that is fun to live in may not be the world that sounds good when spoken into our ears.

And the second lesson is that there’s nothing in the universe out of which to construct Fun Theory, except that which we want for ourselves or prefer to become.

If, in fact, you don’t like praying, then there’s no higher God than yourself to tell you that you should enjoy it. We sometimes do things we don’t like, but that’s still our own choice. There’s no outside force to scold us for making the wrong decision.

This is something for transhumanists to keep in mind—not because we’re tempted to pray, of course, but because there are so many other logical-sounding solutions we wouldn’t really want.

The transhumanist philosopher David Pearce is an advocate of what he calls the Hedonistic Imperative: The eudaimonic life is the one that is as pleasurable as possible. So even happiness attained through drugs is good? Yes, in fact: Pearce’s motto is “Better Living Through Chemistry”.

Or similarly: Once, after giving a small informal talk on the Stanford campus, I raised the topic of Fun Theory in the post-talk mingling. And someone there said that his ultimate objective was to experience delta pleasure. That’s “delta” as in the Dirac delta—roughly, an infinitely high spike (that happens to be integrable). “Why?” I asked. He said, “Because that means I win.”

(I replied, “How about if you get two times delta pleasure? Do you win twice as hard?”)
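
For readers who haven’t met it, the delta’s defining property is all the joke requires. A minimal statement, in LaTeX—standard textbook material, added here for reference, not part of the original exchange:

\[
\delta(x) = 0 \ \text{for } x \neq 0,
\qquad \int_{-\infty}^{\infty} \delta(x)\,dx = 1,
\qquad \int_{-\infty}^{\infty} 2\,\delta(x)\,dx = 2.
\]

The spike is “infinitely high” either way; only the integral distinguishes delta pleasure from two-times-delta pleasure, which is what makes “winning twice as hard” a coherent, if absurd, comeback.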

In the transhumanist lexicon, “orgasmium” refers to simplified brains that are just pleasure centers experiencing huge amounts of stimulation—a happiness counter containing a large number, plus whatever minimal surrounding framework is needed to experience it. You can imagine a whole galaxy tiled with orgasmium. Would this be a good thing?
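
If that description seems abstract, here is a deliberately crude Python sketch of what it literally amounts to. The class and its methods are my own hypothetical illustration, not anything from the transhumanist lexicon:

    class Orgasmium:
        """A happiness counter, plus the minimum framework to experience it."""

        def __init__(self) -> None:
            self.happiness = 0  # the happiness counter

        def stimulate(self, amount: int = 10**9) -> None:
            # "Huge amounts of stimulation": just increment the counter.
            self.happiness += amount

        def experience(self) -> int:
            # The minimal surrounding framework: read the number back out.
            return self.happiness

    galaxy = [Orgasmium() for _ in range(10**6)]  # tile a (very small) galaxy
    for mind in galaxy:
        mind.stimulate()

That the whole thought experiment fits in a dozen lines is, of course, part of the point.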

And the vertigo-inducing thought is this—if you would prefer not to become orgasmium, then why should you?

Mind you, there are many reasons why something that sounds unpreferred at first glance might be worth a closer look. That was the first lesson. Many Christians think they want to go to Heaven.

But when it comes to the question, “Don’t I have to want to be as happy as possible?” then the answer is simply “No. If you don’t prefer it, why go there?”

There’s nothing except such preferences out of which to construct Fun Theory—a second look is still a look, and must still be constructed out of preferences at some level.

In the era of my foolish youth, when I went into an affective death spiral around intelligence, I thought that the mysterious “right” thing that any superintelligence would inevitably do, would be to upgrade every nearby mind to superintelligence as fast as possible. Intelligence was good; therefore, more intelligence was better.

Somewhat later I imagined the scenario of unlimited computing power, so that no matter how smart you got, you were still just as far from infinity as ever. That got me thinking about a journey rather than a destination, and allowed me to think “What rate of intelligence increase would be fun?”

But the real break came when I naturalized my understanding of morality, and value stopped being a mysterious attribute of unknown origins.

Then if there was no outside light in the sky to order me to do things—

The thought occurred to me that I didn’t actually want to bloat up immediately into a superintelligence, or have my world transformed instantaneously and completely into something incomprehensible. I’d prefer to have it happen gradually, with time to stop and smell the flowers along the way.

It felt like a very guilty thought, but—

But there was nothing higher to override this preference.

In which case, if the Friendly AI project succeeded, there would be a day after the Singularity to wake up to, and myself to wake up to it.

You may not see why this would be a vertigo-inducing concept. Pretend you’re Eliezer2003, who has spent the last seven years talking about how it’s forbidden to try to look beyond the Singularity—because the AI is smarter than you, and if you knew what it would do, you would have to be that smart yourself—

—but what if you don’t want the world to be made suddenly incomprehensible? Then there might be something to understand, that next morning, because you don’t actually want to wake up in an incomprehensible world, any more than you actually want to suddenly be a superintelligence, or turn into orgasmium.

I can only analogize the experience to a theist who’s suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.

You may find it hard to sympathize. Well, Eliezer1996, who originally made the mistake, was smart but methodologically inept, as I’ve mentioned a few times.

Still, expect to see some outraged comments on this very blog post, from commenters who think that it’s selfish and immoral, and above all a failure of imagination, to talk about human-level minds still running around the day after the Singularity.

That’s the frame of mind I used to occupy—that the things I wanted were selfish, and that I shouldn’t think about them too much, or at all, because I would need to sacrifice them for something higher.

People who talk about an existential pit of meaninglessness in a universe devoid of meaning—I’m pretty sure they don’t understand morality in naturalistic terms. There is vertigo involved, but it’s not the vertigo of meaninglessness.

More like a theist who is frightened that someday God will order him to murder children, and then he realizes that there is no God and his fear of being ordered to murder children was morality. It’s a strange relief, mixed with the realization that you’ve been very silly, as the last remnant of outrage at your own selfishness fades away.

So the first step toward Fun Theory is that, so far as I can tell, it looks basically okay to make our future light cone—all the galaxies that we can get our hands on—into a place that is fun rather than not fun.

We don’t need to transform the universe into something we feel dutifully obligated to create, but that isn’t really much fun—in the same way that a Christian would feel dutifully obliged to enjoy Heaven—or that some strange folk think that creating orgasmium is, logically, the rightest thing to do.

Fun is okay. It’s allowed. It doesn’t get any better than fun.

And then we can turn our attention to the question of what is fun, and how to have it.

Part of The Fun Theory Sequence

Next post: “High Challenge”

(start of sequence)