Let’s Read: an essay on AI Theology

Follow me as I read through an essay on AI theology and give my comments: Good Shepherds (O’Gieblyn, 2019).

“It’s not a human move,” said one former champion. “I’ve never seen a human play this move.” Even AlphaGo’s creator could not explain the algorithm’s choice. But it proved decisive.

Obligatory reference to AlphaGo. Could have mentioned AlphaZero, but it was not nearly as popular as AlphaGo.

While Elon Musk and Bill Gates maunder on about AI nightmare scenarios—self-replication, the singularity—the smartest critics recognize that the immediate threat these machines pose is not existential but epistemological.

By “immediate” the author means “now”. Reasonable. The AI apocalypse is not scheduled until perhaps 100 years later.

Noah Harari has argued that the religion of “Dataism” will soon undermine the foundations of liberalism. “Just as according to Christianity we humans cannot understand God and His plan,” he writes, “so Dataism declares that the human brain cannot fathom the new master algorithms.”… Our role as humans is not to question the algorithmic logic but to submit to it.

“Dataism” does not necessarily undermine liberalism. It is possible to augment human thought to keep up with the data deluge and preserve (trans)human liberalism, though this is not guaranteed.

Job assumes the role of a prosecuting attorney and demands a cosmic explanation for his suffering. God dutifully appears in court, but only to humiliate Job with a display of divine supremacy. Where were you when I laid the foundation of the earth? he thunders, then poses a litany of questions that no human can possibly answer. Job is so flummoxed, he denounces his own ability to reason. “Therefore I have declared that which I did not understand,” he says, “Things too wonderful for me, which I did not know.”

The problem of theodicy is amusing, and the trials of God can make for very dark humor. My favorite:

In a concentration camp, one evening after work, a rabbi called together three of his colleagues and convoked a special court. Standing with his head held high before them, he spoke as follows: “I intend to convict God of murder, for he is destroying his people and the law he gave to them … I have irrefutable proof in my hands. Judge without fear or sorrow or prejudice. Whatever you have to lose has long since been taken away.” The trial proceeded in due legal form, with witnesses for both sides, with pleas and deliberations. The unanimous verdict: “Guilty.”

Back to the essay.

Throughout the Middle Ages, Christians viewed him in quite a different light. Theology was still inflected with Platonism and rested on the premise that both God and the natural world were comprehensible to human reason. “All things among themselves possess an order, and this is the form that makes the universe like God,”… It wasn’t until the fourteenth century that theologians began to argue that God was not limited by rational laws; he was free to command whatever he wanted, and whatever he decreed became virtuous simply because he decreed it. This new doctrine, nominalism, reached its apotheosis in the work of the Reformers. Like Calvin, Martin Luther believed that God’s will was incomprehensible. Divine justice, he wrote, is “entirely alien to ourselves.”

I do think that if a God made this world, then yes, They are clearly entirely alien and probably more interested in quantum mechanics than justice. Also, one should note that the Reformation probably neither helped nor suppressed science. See History: Science and the Reformation (David Wootton, 2017).

I often felt myself to be an ant in a network of vast global structures—the market, technology—that exceeded my powers of comprehension. To my mind, even contemporary physics (the little I’d read), with its hypotheses on multiverses and other dimensions, echoed Calvin’s view that our bodies were faulty instruments ill-equipped to understand the absolute.

No need to invoke the more speculative parts. Just the verified Standard Model is strange enough. Or the endless technology stacks in electronics...

The same year he received his sentence, a ProPublica report found that the software was far more likely to incorrectly assign higher recidivism rates to black defendants than to white defendants. The algorithm suffers from a problem that has become increasingly common in these models—and that is, in fact, inherent to them. Because the algorithms are trained on historical data (for example, past court decisions), their outcomes often reflect human biases.

Two problems:

  • There are many standards of statistical fairness, and they are incompatible. COMPAS is unfair in one statistical sense but (almost) fair in another statistical sense. The reason is extremely simple and involves base rates; see the sketch after this list. Essentially, the charges of unfairness boil down to “base rates (Blacks recidivate more than Whites) are biases, and biases are morally wrong.”

  • Human judges do “training on historical data” too, and call it “common law”. From what I know, English law is a giant tarball of history and really hard to change, just like an old code base. Despite this, it’s still used and has been defended for centuries as really good for some reasons. I wonder if the defences of the case law system could be used against the author?
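To make the first bullet concrete, here is a minimal numerical sketch (all numbers are made up, not COMPAS’s actual rates): hold calibration (PPV) and sensitivity (TPR) equal across two groups with different base rates, and the false positive rates are forced apart.

```python
# Minimal sketch: with unequal base rates, equal PPV and equal TPR across
# groups force unequal false positive rates. Numbers are hypothetical.

def fpr_implied(ppv, tpr, base_rate):
    """False positive rate forced by fixing PPV and TPR at a given base rate.

    Derived from PPV = p*TPR / (p*TPR + (1-p)*FPR), solved for FPR.
    """
    p = base_rate
    return p * tpr * (1 - ppv) / ((1 - p) * ppv)

# Same "calibration-style" fairness (PPV) and same sensitivity (TPR) for both:
for group, base_rate in [("group A", 0.50), ("group B", 0.30)]:
    fpr = fpr_implied(ppv=0.6, tpr=0.7, base_rate=base_rate)
    print(f"{group}: base rate {base_rate:.0%} -> FPR {fpr:.3f}")

# group A: base rate 50% -> FPR 0.467
# group B: base rate 30% -> FPR 0.200
```

With unequal base rates, “calibrated across groups” and “equal error rates across groups” become mutually exclusive, which is the sense in which COMPAS can be called (almost) fair by one standard and unfair by another.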

From what I’ve observed, “bias” and “prior” are distinguished entirely by moral judgment. A “bias” is an immoral base rate belief; a “prior” is a moral one. As such, whether a base rate belief is seen as “bias” or “prior” can easily be manipulated by framing it in moral language.

According to Haidt’s research, moral feelings come in six kinds:

  1. Care / Harm

  2. Fairness / Cheating

  3. Liberty / Oppression

  4. Loyalty / Betrayal

  5. Authority / Subversion

  6. Sanctity / Degradation

For the distinction between “bias” and “prior”, the most relevant kinds are the first three. For example, the persistent denial of the effectiveness of IQ tests is motivated reasoning, based on moral rejection of how the tests could be used: to justify oppression of low-IQ people, to unfairly let high-IQ children into elite schools, and to cause harms of many kinds. By this moral tainting, any prior based on IQ test results becomes an immoral prior, thus a “bias”.

For more on the moralization of base rate beliefs, see for example The base rate principle and the fairness principle in social judgment (J Cao & MR Banaji, 2016) and The Psychology of the Unthinkable (Tetlock et al., 2000).

Or just contemplate how strange it is that sexual orientation is not discrimination, but friendship orientation might be. I can only be friends with females, and a male acquaintance (who really wants to be my friend) once wondered if that’s discrimination.

Some have developed new methods that work in reverse to suss out data points that may have triggered the machine’s decisions. But these explanations are, at best, intelligent guesses. (Sam Ritchie, a former software engineer at Stripe, prefers the term narratives, since the explanations are not a step-by-step breakdown of the algorithm’s decision-making process but a hypothesis about reasoning tactics it may have used.)

A toy example is how WolframAlpha manages to show you how to solve an integral “step by step”. What it actually does is internally use a general algorithm that’s too hard for humans to understand, then separately use an expert system that looks at the result and the problem and tries to make up a chain of integration tricks that a human could probably have thought up.
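A minimal sketch of that two-part structure, assuming sympy is installed (the tricks in narrate are invented for illustration, not WolframAlpha’s actual rules): one component solves the integral with a general-purpose algorithm, and a separate, much simpler component pattern-matches the problem to a human-sounding “step”.

```python
# Toy sketch: solve with an opaque general algorithm, then separately invent
# a human-readable "narrative" by pattern matching. Illustrative only.
import sympy as sp

x = sp.symbols("x")

def solve_black_box(expr):
    # The general integrator: powerful, but its internal steps are not
    # meant for human consumption.
    return sp.integrate(expr, x)

def narrate(expr):
    # A separate, much dumber component that guesses a familiar trick
    # from the shape of the problem, not from the solver's actual work.
    if expr.is_Pow and expr.base == x and expr.exp.is_number:
        return f"power rule: raise the exponent of {expr} by one and divide"
    if expr.func == sp.sin:
        return "recognize sin(x) as the derivative of -cos(x)"
    return "no familiar trick found; just report the black-box answer"

for problem in (x**3, sp.sin(x)):
    print(problem, "->", solve_black_box(problem), "|", narrate(problem))
```

The “steps” are produced after the fact and need not correspond to anything the solver actually did.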

Humans are also prone to such guessing. When they introspect on their decision process, they might feel that they simply know how they arrived at the decision, but in reality they are inferring it using folk theories of psychology. This is the lesson from (Nisbett, 1977) or The Unbearable Automaticity of Being (Bargh & Chartrand, 1999).

As Yuval Noah Harari points out in his book Homo Deus, humanism has always rested on the premise that people know what’s best for themselves and can make rational decisions about their lives by listening to their “inner voice.” If we decide that algorithms are better than we are at predicting our own desires, it will compromise not only our autonomy but also the larger assumption that individual feelings and convictions are the ultimate source of truth. … “Whereas humanism commanded: ‘Listen to your feelings!’” Harari argues, “Dataism now commands: ‘Listen to the algorithms! They know how you feel.’”

The inner voice, as noted above, is a social voice. It exists to explain the actions of a human to other humans. It does not perceive accurately. An asocial species probably has no inner voice, for there would be no benefit to evolving one.

If this Dataism prediction comes to pass, then the inner voice would simply come from the outside. Like, I would think, “What do I like to do today?” [datastream comes from some hidden decision module located somewhere overseas] “Oh yes, write an essay!”

Instead of experiencing a kind of “me listening to the clever robot”, it would be like “me listening to me”, except the “me” would be weird and spill outside the skull.

The effect would be the same, but the first raises the moral alarm: it has the potential to become immoral by the “Liberty / Oppression” rule.

Kaczynski argues [in the Unabomber Manifesto] that the common sci-fi scenarios of machine rebellion are off base; rather, humans will slowly drift into a dependence on machines, giving over power bit by bit in gradual acquiescence. “As society and the problems that face it become more and more complex and machines become more and more intelligent,” he predicts, “people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones.”

Kaczynski writes with the precision of a mathematician (he did complex analysis back in school), and his manifesto sets out his primitivist humanist philosophy clearly.

This vision seems unlikely to me, simply because it is too benign. People might adapt and survive, but I doubt they would be humans. I believe the future is vomitingly strange.

Kurzweil, who is now a director of engineering at Google, claims he agrees with much of Kaczynski’s manifesto but that he parts ways with him in one crucial area. While Kaczynski feared these technologies were not worth the gamble, Kurzweil believes they are: “Although the risks are quite real, my fundamental belief is that the potential gains are worth the risk.”

Science, as Fromm said, cannot give humanity the sense of meaning it ultimately craves. He believed that ethical and moral systems must be rooted in human nature and human interests, and that science, which searches for objective reality, is merely a way of evading this responsibility.

There is a third way. Science can reveal what humans experience as meaningful, and using that, meaning can be mass-produced at an affordable price. Positive psychology, for example, has shown that there are three core components of the feeling of meaning (F Martela & MF Steger, 2016):

  • Coherence: is what I observe about the world understandable?

  • Purpose: is there a valued and clearly defined goal for the world and me?

  • Significance: can I make a difference in achieving that goal?

It has also, incidentally, found that people feel their lives are pretty meaningful (SJ Heintzelman & LA King, 2014).

The problem of “Yes, it is the feeling of meaning, but is it really meaning?” is of no practical significance. Presumably humans, with their habit of self-denial (they hate to become predictable, even to themselves), would rebel against guaranteed meaning if they recognized it. That appears unlikely (most people welcome guaranteed health and shelter as a human right), but if it does happen, the meaning-manufacturing industry can simply become invisible and employ artists who subscribe to existentialism (meaning can only be constructed).

a truly humanistic faith demands a deity with such limits. This doctrine requires that humans relinquish their need for certainty—and for an intelligence who can provide definitive answers—and instead accept life as an irreducible mystery. If science persists in its quest for superintelligence, it might learn much from this tradition.

Typical of these essays to insert a meaningful conclusion at the end. It does not occur to the author that they could also accept the algorithms as irreducible mysteries.

I prefer posthumanism. I do not have much sympathy with humanists’ rigid attempts to circumscribe what is human, anyway. If I somehow end up becoming posthuman, okay.