Devil’s Offers

Previously in series: Harmful Options

An iota of fictional evidence from The Golden Age by John C. Wright:

Helion had leaned and said, “Son, once you go in there, the full powers and total command structures of the Rhadamanth Sophotech will be at your command. You will be invested with godlike powers; but you will still have the passions and distempers of a merely human spirit. There are two temptations which will threaten you. First, you will be tempted to remove your human weaknesses by abrupt mental surgery. The Invariants do this, and to a lesser degree, so do the White Manorials, abandoning humanity to escape from pain. Second, you will be tempted to indulge your human weakness. The Cacophiles do this, and to a lesser degree, so do the Black Manorials. Our society will gladly feed every sin and vice and impulse you might have; and then stand by helplessly and watch as you destroy yourself; because the first law of the Golden Oecumene is that no peaceful activity is forbidden. Free men may freely harm themselves, provided only that it is only themselves that they harm.”
Phaethon knew what his sire was intimating, but he did not let himself feel irritated. Not today. Today was the day of his majority, his emancipation; today, he could forgive even Helion’s incessant, nagging fears.
Phaethon also knew that most Rhadamanthines were not permitted to face the Noetic tests until they were octogenarians; most did not pass on their first attempt, or even their second. Many folk were not trusted with the full powers of an adult until they reached their Centennial. Helion, despite criticism from the other Silver-Gray branches, was permitting Phaethon to face the tests five years early...

Then Phaethon said, “It’s a paradox, Father. I cannot be, at the same time and in the same sense, a child and an adult. And, if I am an adult, I cannot be, at the same time, free to make my own successes, but not free to make my own mistakes.”
Helion looked sardonic. “‘Mistake’ is such a simple word. An adult who suffers a moment of foolishness or anger, one rash moment, has time enough to delete or destroy his own free will, memory, or judgment. No one is allowed to force a cure on him. No one can restore his sanity against his will. And so we all stand quietly by, with folded hands and cold eyes, and meekly watch good men annihilate themselves. It is somewhat… quaint… to call such a horrifying disaster a ‘mistake.’”

Is this the best Future we could possibly get to—the Future where you must be absolutely stern and resistant throughout your entire life, because one moment of weakness is enough to betray you to overwhelming temptation?

Such flawless perfection would be easy enough for a superintelligence, perhaps—for a true adult—but for a human, even a hundred-year-old human, it seems like a dangerous and inhospitable place to live. Even if you are strong enough to always choose correctly—maybe you don’t want to have to be so strong, always at every moment.

This is the great flaw in Wright’s otherwise shining Utopia—that the Sophotechs are helpfully offering up overwhelming temptations to people who would not be at quite so much risk from only themselves. (Though if not for this flaw in Wright’s Utopia, he would have had no story...)

If I recall correctly, it was while reading The Golden Age that I generalized the principle “Offering people powers beyond their own is not always helping them.”

If you couldn’t just ask a Sophotech to edit your neural networks—and you couldn’t buy a standard package at the supermarket—but, rather, had to study neuroscience yourself until you could do it with your own hands—then that would act as something of a natural limiter. Sure, there are pleasure centers that would be relatively easy to stimulate; but we don’t tell you where they are, so you have to do your own neuroscience. Or we don’t sell you your own neurosurgery kit, so you have to build it yourself—metaphorically speaking, anyway—

But you see the idea: it is not so terrible a disrespect for free will, to live in a world in which people are free to shoot their feet off through their own strength—in the hope that by the time they’re smart enough to do it under their own power, they’re smart enough not to.

The more dangerous and destructive the act, the more you require people to do it without external help. If it’s really dangerous, you don’t just require them to do their own engineering, but to do their own science. A singleton might be justified in prohibiting standardized textbooks in certain fields, so that people have to do their own science—make their own discoveries, learn to rule out their own stupid hypotheses, and fight their own overconfidence. Besides, everyone should experience the joy of major discovery at least once in their lifetime, and to do this properly, you may have to prevent spoilers from entering the public discourse. So you’re getting three social benefits at once, here.

But now I’m trailing off into plots for SF novels, instead of Fun Theory per se. (It can be fun to muse on how I would create the world if I had to order it according to my own childish wisdom, but in real life one rather prefers to avoid that scenario.)

As a matter of Fun Theory, though, you can imagine a better world than the Golden Oecumene depicted above—it is not the best world imaginable, fun-theoretically speaking. We would prefer (if attainable) a world in which people own their own mistakes and their own successes, and yet they are not given loaded handguns on a silver platter, nor do they perish through suicide by genie bottle.

Once you imagine a world in which people can shoot off their own feet through their own strength, are you making that world incrementally better by offering incremental help along the way?

It’s one matter to prohibit people from using dangerous powers that they have grown enough to acquire naturally—to literally protect them from themselves. One expects that if a mind kept getting smarter, at some eudaimonic rate of intelligence increase, then—if you took the most obvious course—the mind would eventually become able to edit its own source code, and bliss itself out if it chose to do so. Unless the mind’s growth were steered onto a non-obvious course, or monitors were mandated to prohibit that event… To protect people from their own powers might take some twisting.

To descend from above and offer dangerous powers as an untimely gift, is another matter entirely. That’s why the title of this post is “Devil’s Offers”, not “Dangerous Choices”.

And to allow dangerous powers to be sold in a marketplace—or alternatively to prohibit them from being transferred from one mind to another—that is somewhere in between.

John C. Wright’s writing has a particular poignancy for me, for in my foolish youth I thought that something very much like this scenario was a good idea—that a benevolent superintelligence ought to go around offering people lots of options, and doing as it was asked.

In retrospect, this was a case of a pernicious distortion where you end up believing things that are easy to market to other people.

I know someone who drives across the country on long trips, rather than flying. Air travel scares him. Statistics, naturally, show that flying a given distance is much safer than driving it. But some people fear too much the loss of control that comes from not having their own hands on the steering wheel. It’s a common complaint.

The future sounds less scary if you imagine yourself having lots of control over it. For every awful thing that you imagine happening to you, you can imagine, “But I won’t choose that, so it will be all right.”

And if it’s not your own hands on the steering wheel, you think of scary things, and imagine, “What if this is chosen for me, and I can’t say no?”

But in real life rather than imagination, human choice is a fragile thing. If the whole field of heuristics and biases teaches us anything, it surely teaches us that. Nor has it been the verdict of experiment, that humans correctly estimate the flaws of their own decision mechanisms.

I flinched away from that thought’s implications, not so much because I feared superintelligent paternalism myself, but because I feared what other people would say of that position. If I believed it, I would have to defend it, so I managed not to believe it. Instead I told people not to worry, a superintelligence would surely respect their decisions (and even believed it myself). A very pernicious sort of self-deception.

Human governments are made up of humans who are foolish like ourselves, plus they have poor incentives. Less skin in the game, and specific human brainware to be corrupted by wielding power. So we’ve learned the historical lesson to be wary of ceding control to human bureaucrats and politicians. We may even be emotionally hardwired to resent the loss of anything we perceive as power.

Which is just to say that people are biased, by instinct, by anthropomorphism, and by narrow experience, to underestimate how much they could potentially trust a superintelligence which lacks a human’s corruption circuits, doesn’t easily make certain kinds of mistakes, and has strong overlap between its motives and your own interests.

Do you trust yourself? Do you trust yourself to know when to trust yourself? If you’re dealing with a superintelligence kindly enough to care about you at all, rather than disassembling you for raw materials, are you wise to second-guess its choice of who it thinks should decide? Do you think you have a superior epistemic vantage point here, or what?

Obviously we should not trust all agents who claim to be trustworthy—especially if they are weak enough, relative to us, to need our goodwill. But I am quite ready to accept that a benevolent superintelligence may not offer certain choices.

If you feel safer driving than flying, because that way it’s your own hands on the steering wheel, statistics be damned—

—then maybe it isn’t helping you, for a superintelligence to offer you the option of driving.

Gravity doesn’t ask you if you would like to float up out of the atmosphere into space and die. But you don’t go around complaining that gravity is a tyrant, right? You can build a spaceship if you work hard and study hard. It would be a more dangerous world if your six-year-old son could do it in an hour using string and cardboard.

Part of The Fun Theory Sequence

Next post: “Nonperson Predicates”

Previous post: “Harmful Options”