Visualizing Eutopia

Followup to: Not Taking Over the World

“Heaven is a city 15,000 miles square or 6,000 miles around. One side is 245 miles longer than the length of the Great Wall of China. Walls surrounding Heaven are 396,000 times higher than the Great Wall of China and eight times as thick. Heaven has twelve gates, three on each side, and has room for 100,000,000,000 souls. There are no slums. The entire city is built of diamond material, and the streets are paved with gold. All inhabitants are honest and there are no locks, no courts, and no policemen.”
-- Reverend Doctor George Hawes, in a sermon

Yesterday I asked my esteemed co-blogger Robin what he would do with “unlimited power”, in order to reveal something of his character. Robin said that he would (a) be very careful and (b) ask for advice. I asked him what advice he would give himself. Robin said it was a difficult question and he wanted to wait on considering it until it actually happened. So overall he ran away from the question like a startled squirrel.

The character thus revealed is a virtuous one: it shows common sense. A lot of people jump after the prospect of absolute power like it was a coin they found in the street.

When you think about it, though, it says a lot about human nature that this is a difficult question. I mean—most agents with utility functions shouldn’t have such a hard time describing their perfect universe.

For a long time, I too ran away from the question like a startled squirrel. First I claimed that superintelligences would inevitably do what was right, relinquishing moral responsibility in toto. After that, I propounded various schemes to shape a nice superintelligence, and let it decide what should be done with the world.

Not that there’s anything wrong with that. Indeed, this is still the plan. But it still meant that I, personally, was ducking the question.

Why? Because I expected to fail at answering. Because I thought that any attempt by humans to visualize a better future was going to end up recapitulating the Reverend Doctor George Hawes: apes thinking, “Boy, if I had human intelligence I sure could get a lot more bananas.”

But trying to get a better answer to a question out of a superintelligence is a different matter from entirely ducking the question yourself. The point at which I stopped ducking was the point at which I realized that it’s actually quite difficult to get a good answer to something out of a superintelligence while simultaneously having literally no idea how to answer it yourself.

When you’re dealing with confusing and difficult questions—as opposed to those that are straightforward but numerically tedious—it’s quite suspicious to have, on the one hand, a procedure you can execute to reliably answer the question, and, on the other hand, no idea of how to answer it yourself.

If you could write a computer program that you knew would reliably output a satisfactory answer to “Why does anything exist in the first place?” or “Why do I find myself in a universe giving rise to experiences that are ordered rather than chaotic?”, then shouldn’t you be able to at least try executing the same procedure yourself?

I suppose there could be some section of the procedure where you’ve got to do a septillion operations and so you’ve just got no choice but to wait for superintelligence, but really, that sounds rather suspicious in cases like these.

So it’s not that I’m planning to use the output of my own intelligence to take over the universe. But I did realize at some point that it was too suspicious to entirely duck the question while trying to make a computer knowably solve it. It didn’t even seem all that morally cautious, once I put it in those terms. You can design an arithmetic chip using purely abstract reasoning, but would you be wise to never try an arithmetic problem yourself?

And when I did finally try—well, that caused me to update in various ways.

It does make a difference to try doing arithmetic yourself, instead of just trying to design chips that do it for you. So I found.

Hence my bugging Robin about it.

For it seems to me that Robin asks too little of the future. It’s all very well to plead that you are only forecasting, but if you display greater revulsion to the idea of a Friendly AI than to the idea of rapacious hardscrapple frontier folk...

I thought that Robin might be asking too little, due to not visualizing any future in enough detail. Not the future but any future. I’d hoped that if Robin had allowed himself to visualize his “perfect future” in more detail, rather than focusing on all the compromises he thinks he has to make, he might see that there were futures more desirable than the rapacious hardscrapple frontier folk.

It’s hard to see on an emotional level why a genie might be a good thing to have, if you haven’t acknowledged any wishes that need granting. It’s like not feeling the temptation of cryonics, if you haven’t thought of anything the Future contains that might be worth seeing.

I’d also hoped to persuade Robin, if his wishes were complicated enough, that there were attainable good futures that could not come about by letting things go their own way. So that he might begin to see the future as I do, as a dilemma between extremes: the default, loss of control, followed by a Null future containing little or no utility, versus extremely precise steering through “impossible” problems to get to any sort of Good future whatsoever.

This is mostly a matter of appreciating how even the desires we call “simple” actually contain many bits of information. It means getting past anthropomorphic optimism and realizing that a Future not strongly steered by our utility functions is likely to contain little or no utility, for the same reason it’s hard to hit a distant target while shooting blindfolded...
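To make the “many bits” point concrete, here is a toy calculation in Python (my own illustration, not from the original post; the bit counts are arbitrary placeholders): if pinning down the desired region of outcome-space takes N bits to specify, then a future not steered toward that region lands in it with probability on the order of 2^-N.

    # Toy sketch of the "bits of information" argument above.
    # Assumption (mine, for illustration): a "simple" desire corresponds to a
    # target region of outcome-space whose description takes N bits, and an
    # unsteered process effectively samples futures at random.
    def chance_of_unsteered_hit(bits_to_specify_target: int) -> float:
        """Probability that a randomly chosen future lands in a target
        region whose description requires the given number of bits."""
        return 2.0 ** -bits_to_specify_target

    print(chance_of_unsteered_hit(10))   # ~1e-3: even a crude 10-bit wish is rarely hit by chance
    print(chance_of_unsteered_hit(100))  # ~8e-31: a 100-bit wish is essentially never hit by chance

Human values presumably take far more than 100 bits to pin down, which is the sense in which an unsteered future is expected to contain little or no utility.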

But if your “desired future” remains mostly unspecified, that may encourage too much optimism as well.