Beyond the Reach of God

Today’s post is a tad gloomier than usual, as I measure such things. It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me. Those readers sympathetic to arguments like, “It’s important to keep our biases because they help us stay happy,” should consider not reading. (Unless they have something to protect, including their own life.)

So! Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future’s vulnerability—a reluctance to accept that things could really turn out wrong. Not as the result of any explicit propositional verbal belief. More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.

Some would account this a virtue (zettai daijobu da yo: “everything will definitely be all right”), and others would say that it’s a thing necessary for mental health.

But we don’t live in that world. We live in the world beyond the reach of God.

It’s been a long, long time since I believed in God. Growing up in an Orthodox Jewish family, I can recall the last time I asked God for something, though I don’t remember how old I was. I was putting in some request on behalf of the boy next door, I forget what exactly—something along the lines of, “I hope things turn out all right for him,” or maybe “I hope he becomes Jewish.”

I remember what it was like to have some higher authority to appeal to, to take care of things I couldn’t handle myself. I didn’t think of it as “warm”, because I had no alternative to compare it to. I just took it for granted.

Still I recall, though only from distant childhood, what it’s like to live in the conceptually impossible possible world where God exists. Really exists, in the way that children and rationalists take all their beliefs at face value.

In the world where God exists, does God intervene to optimize everything? Regardless of what rabbis assert about the fundamental nature of reality, the take-it-seriously operational answer to this question is obviously “No”. You can’t ask God to bring you a lemonade from the refrigerator instead of getting one yourself. When I believed in God after the serious fashion of a child, so very long ago, I didn’t believe that.

Postulating that particular divine inaction doesn’t provoke a full-blown theological crisis. If you said to me, “I have constructed a benevolent superintelligent nanotech-user”, and I said “Give me a banana,” and no banana appeared, this would not yet disprove your statement. Human parents don’t always do everything their children ask. There are some decent fun-theoretic arguments—I even believe them myself—against the idea that the best kind of help you can offer someone is to always immediately give them everything they want. I don’t think that eudaimonia is formulating goals and having them instantly fulfilled; I don’t want to become a simple wanting-thing that never has to plan or act or think.

So it’s not necessarily an attempt to avoid falsification, to say that God does not grant all prayers. Even a Friendly AI might not respond to every request.

But clearly, there exists some threshold of horror awful enough that God will intervene. I remember that being true, when I believed after the fashion of a child.

The God who does not intervene at all, no matter how bad things get—that’s an obvious attempt to avoid falsification, to protect a belief-in-belief. Sufficiently young children don’t have the deep-down knowledge that God doesn’t really exist. They really expect to see a dragon in their garage. They have no reason to imagine a loving God who never acts. Where exactly is the boundary of sufficient awfulness? Even a child can imagine arguing over the precise threshold. But of course God will draw the line somewhere. Few indeed are the loving parents who, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.

The obvious example of a horror so great that God cannot tolerate it, is death—true death, mind-annihilation. I don’t think that even Buddhism allows that. So long as there is a God in the classic sense—full-blown, ontologically fundamental, the God—we can rest assured that no sufficiently awful event will ever, ever happen. There is no soul anywhere that need fear true annihilation; God will prevent it.

What if you build your own simulated universe? The classic example of a simulated universe is Conway’s Game of Life. I do urge you to investigate Life if you’ve never played it—it’s important for comprehending the notion of “physical law”. Conway’s Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe, though it might be rather fragile and awkward. Other cellular automata would make it simpler.
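For readers who have never played it, Life’s entire physics fits in a few lines. Here is a minimal sketch in Python (my own illustration, not anything from the original post), using the standard rules: a dead cell with exactly three live neighbors comes alive, a live cell with two or three live neighbors survives, and every other cell dies. The grid is represented simply as a set of live (x, y) coordinates.

```python
from collections import Counter

def life_step(live):
    """Advance a set of live (x, y) cells by one generation of Conway's Life."""
    # Count live neighbors for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3 (2 plus already-alive).
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
step1 = life_step(blinker)   # the vertical form: {(1, 0), (1, 1), (1, 2)}
step2 = life_step(step1)     # back to the original horizontal row
```

Run from the same initial state, this produces the same history every time; nothing in the update rule inspects a state for fairness before allowing it.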

Could you, by creating a simulated universe, escape the reach of God? Could you simulate a Game of Life containing sentient entities, and torture the beings therein? But if God is watching everywhere, then trying to build an unfair Life just results in the God stepping in to modify your computer’s transistors. If the physics you set up in your computer program calls for a sentient Life-entity to be endlessly tortured for no particular reason, the God will intervene. God being omnipresent, there is no refuge anywhere for true horror: Life is fair.

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result?

Not even God can modify the answer to this question, unless you believe that God can implement logical impossibilities. Even as a very young child, I don’t remember believing that. (And why would you need to believe it, if God can modify anything that actually exists?)

What does Life look like, in this imaginary world where every step follows only from its immediate predecessor? Where things only ever happen, or don’t happen, because of the cellular automaton rules? Where the initial conditions and rules don’t describe any God that checks over each state? What does it look like, the world beyond the reach of God?

That world wouldn’t be fair. If the initial state contained the seeds of something that could self-replicate, natural selection might or might not take place, and complex life might or might not evolve, and that life might or might not become sentient, with no God to guide the evolution. That world might evolve the equivalent of conscious cows, or conscious dolphins, that lacked hands to improve their condition; maybe they would be eaten by conscious wolves who never thought that they were doing wrong, or cared.

If in a vast plethora of worlds, something like humans evolved, then they would suffer from diseases—not to teach them any lessons, but only because viruses happened to evolve as well, under the cellular automaton rules.

If the people of that world are happy, or unhappy, the causes of their happiness or unhappiness may have nothing to do with good or bad choices they made. Nothing to do with free will or lessons learned. In the what-if world where every step follows only from the cellular automaton rules, the equivalent of Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average. Who prevents it? God would prevent it from ever actually happening, of course; He would at the very least visit some shade of gloom in the Khan’s heart. But in the mathematical answer to the question What if? there is no God in the axioms. So if the cellular automaton rules say that the Khan is happy, that, simply, is the whole and only answer to the what-if question. There is nothing, absolutely nothing, to prevent it.

And if the Khan tortures people horribly to death over the course of days, for his own amusement perhaps? They will call out for help, perhaps imagining a God. And if you really wrote that cellular automaton, God would intervene in your program, of course. But in the what-if question, what the cellular automaton would do under the mathematical rules, there isn’t any God in the system. Since the physical laws contain no specification of a utility function—in particular, no prohibition against torture—the victims will be saved only if the right cells happen to be 0 or 1. And it’s not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that. So the victims die, screaming, and no one helps them; that is the answer to the what-if question.

Could the victims be completely innocent? Why not, in the what-if world? If you look at the rules for Conway’s Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple. A cell with exactly three living neighbors is alive in the next generation; a cell with two living neighbors keeps its current state; all other cells die. There isn’t anything in there about innocent people not being horribly tortured for indefinite periods.

Is this world starting to sound familiar?

Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited: Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?

For so many lives and so much loss to turn on a single event, seems disproportionate. The Divine Plan ought to make more sense than that. You can believe in a Divine Plan without believing in God—Karl Marx surely did. You shouldn’t have millions of lives depending on a casual choice, an hour’s timing, the speed of a microscopic flagellum. It ought not to be allowed. It’s too disproportionate. Therefore, if Adolf Hitler had been able to go to high school and become an architect, there would have been someone else to take his role, and World War II would have happened the same as before.

But in the world beyond the reach of God, there isn’t any clause in the physical axioms which says “things have to make sense” or “big effects need big causes” or “history runs on reasons too important to be so fragile”. There is no God to impose that order, which is so severely violated by having the lives and deaths of millions depend on one small molecular event.

The point of the thought experiment is to lay out the God-universe and the Nature-universe side by side, so that we can recognize what kind of thinking belongs to the God-universe. Many who are atheists, still think as if certain things are not allowed. They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect. But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler’s personality, often in defiance of his generals and advisors. There is no particular empirical justification that I happen to have heard of, for doubting this. The main reason to doubt would be refusal to accept that the universe could make so little sense—that horrible things could happen so lightly, for no more reason than a roll of the dice.

But why not? What prohibits it?

In the God-universe, God prohibits it. To recognize this is to recognize that we don’t live in that universe. We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else. Whatever physics says will happen, will happen. Absolutely anything, good or bad, will happen. And there is nothing in the laws of physics to lift this rule even for the really extreme cases, where you might expect Nature to be a little more reasonable.

Reading William Shirer’s The Rise and Fall of the Third Reich, listening to him describe the disbelief that he and others felt upon discovering the full scope of Nazi atrocities, I thought of what a strange thing it was, to read all that, and know, already, that there wasn’t a single protection against it. To just read through the whole book and accept it; horrified, but not at all disbelieving, because I’d already understood what kind of world I lived in.

Once upon a time, I believed that the extinction of humanity was not allowed. And others who call themselves rationalists, may yet have things they trust. They might be called “positive-sum games”, or “democracy”, or “technology”, but they are sacred. The mark of this sacredness is that the trustworthy thing can’t lead to anything really bad; or at least it can’t be permanently defaced, not without a compensatory silver lining. In that sense it can be trusted, even if a few bad things happen here and there.

The unfolding history of Earth can’t ever turn from its positive-sum trend to a negative-sum trend; that is not allowed. Democracies—modern liberal democracies, anyway—won’t ever legalize torture. Technology has done so much good up until now, that there can’t possibly be a Black Swan technology that breaks the trend and does more harm than all the good up until this point.

There are all sorts of clever arguments why such things can’t possibly happen. But the source of these arguments is a much deeper belief that such things are not allowed. Yet who prohibits? Who prevents it from happening? If you can’t visualize at least one lawful universe where physics says that such dreadful things happen—and so they do happen, there being nowhere to appeal the verdict—then you aren’t yet ready to argue probabilities.

Could it really be that sentient beings have died absolutely for thousands or millions of years, with no soul and no afterlife—and not as part of any grand plan of Nature—not to teach any great lesson about the meaningfulness or meaninglessness of life—not even to teach any profound lesson about what is impossible—so that a trick as simple and stupid-sounding as vitrifying people in liquid nitrogen can save them from total annihilation—and a 10-second rejection of the silly idea can destroy someone’s soul? Can it be that a computer programmer who signs a few papers and buys a life-insurance policy continues into the far future, while Einstein rots in a grave? We can be sure of one thing: God wouldn’t allow it. Anything that ridiculous and disproportionate would be ruled out. It would make a mockery of the Divine Plan—a mockery of the strong reasons why things must be the way they are.

You can have secular rationalizations for things being not allowed. So it helps to imagine that there is a God, benevolent as you understand goodness—a God who enforces throughout Reality a minimum of fairness and justice—whose plans make sense and depend proportionally on people’s choices—who will never permit absolute horror—who does not always intervene, but who at least prohibits universes wrenched completely off their track… to imagine all this, but also imagine that you, yourself, live in a what-if world of pure mathematics—a world beyond the reach of God, an utterly unprotected world where anything at all can happen.

If there’s any reader still reading this, who thinks that being happy counts for more than anything in life, then maybe they shouldn’t spend much time pondering the unprotectedness of their existence. Maybe think of it just long enough to sign up themselves and their family for cryonics, and/or write a check to an existential-risk-mitigation agency now and then. And wear a seatbelt and get health insurance and all those other dreary necessary things that can destroy your life if you miss that one step… but aside from that, if you want to be happy, meditating on the fragility of life isn’t going to help.

But this post was written for those who have something to protect.

What can a twelfth-century peasant do to save themselves from annihilation? Nothing. Nature’s little challenges aren’t always fair. When you run into a challenge that’s too difficult, you suffer the penalty; when you run into a lethal penalty, you die. That’s how it is for people, and it isn’t any different for planets. Someone who wants to dance the deadly dance with Nature, does need to understand what they’re up against: Absolute, utter, exceptionless neutrality.

Knowing this won’t always save you. It wouldn’t save a twelfth-century peasant, even if they knew. If you think that a rationalist who fully understands the mess they’re in, must surely be able to find a way out—then you trust rationality, enough said.

Some commenter is bound to castigate me for putting too dark a tone on all this, and in response they will list out all the reasons why it’s lovely to live in a neutral universe. Life is allowed to be a little dark, after all; but not darker than a certain point, unless there’s a silver lining.

Still, because I don’t want to create needless despair, I will say a few hopeful words at this point:

If humanity’s future unfolds in the right way, we might be able to make our future light cone fair(er). We can’t modify fundamental physics, but on a higher level of organization we could build some guardrails and put down some padding; organize the particles into a pattern that does some internal checks against catastrophe. There’s a lot of stuff out there that we can’t touch—but it may help to consider everything that isn’t in our future light cone, as being part of the “generalized past”. As if it had all already happened. There’s at least the prospect of defeating neutrality, in the only future we can touch—the only world that it accomplishes something to care about.

Someday, maybe, immature minds will reliably be sheltered. Even if children go through the equivalent of not getting a lollipop, or even burning a finger, they won’t ever be run over by cars.

And the adults wouldn’t be in so much danger. A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn’t seem so harsh; it would be only another problem to be solved.

The problem is that building an adult is itself an adult challenge. That’s what I finally realized, years ago.

If there is a fair(er) universe, we have to get there starting from this world—the neutral world, the world of hard concrete with no padding, the world where challenges are not calibrated to your skills.

Not every child needs to stare Nature in the eyes. Buckling a seatbelt, or writing a check, is not that complicated or deadly. I don’t say that every rationalist should meditate on neutrality. I don’t say that every rationalist should think all these unpleasant thoughts. But anyone who plans on confronting an uncalibrated challenge of instant death, must not avoid them.

What does a child need to do—what rules should they follow, how should they behave—to solve an adult problem?