High Challenge

There’s a class of prophecy that runs: “In the Future, machines will do all the work. Everything will be automated. Even labor of the sort we now consider ‘intellectual’, like engineering, will be done by machines. We can sit back and own the capital. You’ll never have to lift a finger, ever again.”

But then won’t people be bored?

No; they can play computer games—not like our games, of course, but much more advanced and entertaining.

Yet wait! If you buy a modern computer game, you’ll find that it contains some tasks that are—there’s no kind word for this—effortful. (I would even say “difficult”, with the understanding that we’re talking about something that takes 10 minutes, not 10 years.)

So in the future, we’ll have programs that help you play the game—taking over if you get stuck on the game, or just bored; or so that you can play games that would otherwise be too advanced for you.

But isn’t there some wasted effort, here? Why have one programmer working to make the game harder, and another programmer working to make the game easier? Why not just make the game easier to start with? Since you play the game to get gold and experience points, making the game easier will let you get more gold per unit time: the game will become more fun.

So this is the ultimate end of the prophecy of technological progress—just staring at a screen that says “YOU WIN”, forever.

And maybe we’ll build a robot that does that, too.

Then what?

The world of machines that do all the work—well, I don’t want to say it’s “analogous to the Christian Heaven” because it isn’t supernatural; it’s something that could in principle be realized. Religious analogies are far too easily tossed around as accusations… But, without implying any other similarities, I’ll say that it seems analogous in the sense that eternal laziness “sounds like good news” to your present self who still has to work.

And as for playing games, as a substitute—what is a computer game except synthetic work? Isn’t there a wasted step here? (And computer games in their present form, considered as work, have various aspects that reduce stress and increase engagement; but they also carry costs in the form of artificiality and isolation.)

I sometimes think that futuristic ideals phrased in terms of “getting rid of work” would be better reformulated as “removing low-quality work to make way for high-quality work”.

There’s a broad class of goals that aren’t suitable as the long-term meaning of life, because you can actually achieve them, and then you’re done.

To look at it another way, if we’re looking for a suitable long-run meaning of life, we should look for goals that are good to pursue and not just good to satisfy.

Or to phrase that somewhat less paradoxically: We should look for valuations that are over 4D states, rather than 3D states. Valuable ongoing processes, rather than “make the universe have property P and then you’re done”.
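To make that distinction a little more concrete, here is a minimal, hypothetical sketch in Python; every name in it (`State`, `state_utility`, `trajectory_utility`, the dictionary keys) is invented for illustration, since the essay itself proposes no formalism:

```python
# A minimal, hypothetical sketch of the "3D" versus "4D" valuation
# distinction drawn above. All names are invented for illustration.

from typing import Sequence

State = dict  # a snapshot of the world at a single moment


def state_utility(final: State) -> float:
    """A '3D' valuation: score only the end state.

    Once the predicate holds, you're done; further history adds nothing.
    """
    return 1.0 if final.get("cancer_cured") else 0.0


def trajectory_utility(history: Sequence[State]) -> float:
    """A '4D' valuation: score the whole process, not just where it ends.

    This made-up scoring rewards every step that involved real challenge,
    so how you got there is itself part of what is being valued.
    """
    return sum(1.0 for step in history if step.get("real_challenge"))
```

Under the first sort of valuation, the best policy is whatever reaches the winning snapshot fastest; under the second, the journey itself carries value.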

Timothy Ferris is again worth quoting: To find happiness, “the question you should be asking isn’t ‘What do I want?’ or ‘What are my goals?’ but ‘What would excite me?’”

You might say that for a long-run meaning of life, we need games that are fun to play and not just to win.

Mind you—sometimes you do want to win. There are legitimate goals where winning is everything. If you’re talking, say, about curing cancer, then the suffering experienced by even a single cancer patient outweighs any fun that you might have in solving their problems. If you work at creating a cancer cure for twenty years through your own efforts, learning new knowledge and new skill, making friends and allies—and then some alien superintelligence offers you a cancer cure on a silver platter for thirty bucks—then you shut up and take it.

But “curing cancer” is a problem of the 3D-predicate sort: you want the no-cancer predicate to go from False in the present to True in the future. The importance of this destination far outweighs the journey; you don’t want to go there, you just want to be there. There are many legitimate goals of this sort, but they are not suitable as long-run fun. “Cure cancer!” is a worthwhile activity for us to pursue here and now, but it is not a plausible future goal of galactic civilizations.

Why should this “valuable ongoing process” be a process of trying to do things—why not a process of passive experiencing, like the Buddhist Heaven?

I confess I’m not entirely sure how to set up a “passively experiencing” mind. The human brain was designed to perform various sorts of internal work that add up to an active intelligence; even if you lie down on your bed and exert no particular effort to think, the thoughts that go on through your mind are activities of brain areas that are designed to, you know, solve problems.

How much of the human brain could you eliminate, apart from the pleasure centers, and still keep the subjective experience of pleasure?

I’m not going to touch that one. I’ll stick with the much simpler answer of “I wouldn’t actually prefer to be a passive experiencer.” If I wanted Nirvana, I might try to figure out how to achieve that impossibility. But once you strip away Buddha telling me that Nirvana is the end-all of existence, Nirvana seems rather more like “sounds like good news in the moment of first being told” or “ideological belief in desire” than, y’know, something I’d actually want.

The reason I have a mind at all, is that natural selection built me to do things—to solve certain kinds of problems.

“Because it’s human nature” is not an explicit justification for anything. There is human nature, which is what we are; and there is humane nature, which is what, being human, we wish we were.

But I don’t want to change my nature toward a more passive object—which is a justification. A happy blob is not what, being human, I wish to become.

I earlier argued that many values require both subjective happiness and the external objects of that happiness. That you can legitimately have a utility function that says, “It matters to me whether or not the person I love is a real human being or just a highly realistic nonsentient chatbot, even if I don’t know, because that-which-I-value is not my own state of mind, but the external reality.” So that you need both the experience of love, and the real lover.

You can similarly have valuable activities that require both real challenge and real effort.

Racing along a track, it matters that the other racers are real, and that you have a real chance to win or lose. (We’re not talking about physical determinism here, but whether some external optimization process explicitly chose for you to win the race.)

And it matters that you’re racing with your own skill at running and your own willpower, not just pressing a button that says “Win”. (Though, since you never designed your own leg muscles, you are racing using strength that isn’t yours. A race between robot cars is a purer contest of their designers. There is plenty of room to improve on the human condition.)

And it matters that you, a sentient being, are experiencing it. (Rather than some nonsentient process carrying out a skeleton imitation of the race, trillions of times per second.)

There must be the true effort, the true victory, and the true experience—the journey, the destination, and the traveler.