The Logical Fallacy of Generalization from Fictional Evidence

When I try to introduce the subject of advanced AI, what’s the first thing I hear, more than half the time?

“Oh, you mean like the Terminator movies / The Matrix / Asimov’s robots!”

And I reply, “Well, no, not exactly. I try to avoid the logical fallacy of generalizing from fictional evidence.”

Some people get it right away, and laugh. Others defend their use of the example, disagreeing that it’s a fallacy.

What’s wrong with using movies or novels as starting points for the discussion? No one’s claiming that it’s true, after all. Where is the lie, where is the rationalist sin? Science fiction represents the author’s attempt to visualize the future; why not take advantage of the thinking that’s already been done on our behalf, instead of starting over?

Not every misstep in the precise dance of rationality consists of outright belief in a falsehood; there are subtler ways to go wrong.

First, let us dispose of the notion that science fiction represents a full-fledged rational attempt to forecast the future. Even the most diligent science fiction writers are, first and foremost, storytellers; the requirements of storytelling are not the same as the requirements of forecasting. As Nick Bostrom points out:1

When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)? While this scenario may be much more probable than a scenario in which human heroes successfully repel an invasion of monsters or robot warriors, it wouldn’t be much fun to watch.

So there are specific distortions in fiction.2 But trying to correct for these specific distortions is not enough. A story is never a rational attempt at analysis, not even with the most diligent science fiction writers, because stories don’t use probability distributions. I illustrate as follows:

Bob Merkelthud slid cautiously through the door of the alien spacecraft, glancing right and then left (or left and then right) to see whether any of the dreaded Space Monsters yet remained. At his side was the only weapon that had been found effective against the Space Monsters, a Space Sword forged of pure titanium with 30% probability, an ordinary iron crowbar with 20% probability, and a shimmering black discus found in the smoking ruins of Stonehenge with 45% probability, the remaining 5% being distributed over too many minor outcomes to list here.

Merkelthud (though there’s a significant chance that Susan Wifflefoofer was there instead) took two steps forward or one step back, when a vast roar split the silence of the black airlock! Or the quiet background hum of the white airlock! Although Amfer and Woofi (1997) argue that Merkelthud is devoured at this point, Spacklebackle (2003) points out that—

Characters can be ignorant, but the author can’t say the three magic words “I don’t know.” The protagonist must thread a single line through the future, full of the details that lend flesh to the story, from Wifflefoofer’s appropriately futuristic attitudes toward feminism, down to the color of her earrings.

Then all these burdensome details and questionable assumptions are wrapped up and given a short label, creating the illusion that they are a single package.3

On problems with large answer spaces, the greatest difficulty is not verifying the correct answer but simply locating it in answer space to begin with. If someone starts out by asking whether or not AIs are gonna put us into capsules like in The Matrix, they’re jumping to a 100-bit proposition, without a corresponding 98 bits of evidence to locate it in the answer space as a possibility worthy of explicit consideration. It would only take a handful more evidence after the first 98 bits to promote that possibility to near-certainty, which tells you something about where nearly all the work gets done.
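
To make the bit-counting concrete (a rough gloss of my own; the essay itself gives only the bit figures, and the extra five bits below are an illustrative number): treat each bit of evidence as a factor of two on the odds, and recall that odds of $o$ correspond to a probability of $o/(1+o)$. Then:

\[
\text{prior odds} \approx 2^{-100}, \qquad
2^{-100} \times 2^{98} = 2^{-2} \;\; (p \approx 0.2), \qquad
2^{-2} \times 2^{5} = 2^{3} \;\; (p \approx 0.89).
\]

The first 98 bits do the work of raising the hypothesis from unlocatable to worth considering at all; the last few bits merely finish the job.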

The “preliminary” step of locating possibilities worthy of explicit consideration includes steps like: weighing what you know and don’t know, what you can and can’t predict; making a deliberate effort to avoid absurdity bias and widen confidence intervals; pondering which questions are the important ones, trying to adjust for possible Black Swans and think of (formerly) unknown unknowns. Jumping to “The Matrix: Yes or No?” skips over all of this.

Any professional negotiator knows that to control the terms of a debate is very nearly to control the outcome of the debate. If you start out by thinking of The Matrix, it brings to mind marching robot armies defeating humans after a long struggle—not a superintelligence snapping nanotechnological fingers. It focuses on an “Us vs. Them” struggle, directing attention to questions like “Who will win?” and “Who should win?” and “Will AIs really be like that?” It creates a general atmosphere of entertainment, of “What is your amazing vision of the future?”

Lost to the echoing emptiness are: considerations of more than one possible mind design that an “artificial intelligence” could implement; the future’s dependence on initial conditions; the power of smarter-than-human intelligence and the argument for its unpredictability; people taking the whole matter seriously and trying to do something about it.

If some insidious corrupter of debates decided that their preferred outcome would be best served by forcing discussants to start out by refuting Terminator, they would have done well in skewing the frame. Debating gun control, the NRA spokesperson does not wish to be introduced as a “shooting freak,” the anti-gun opponent does not wish to be introduced as a “victim disarmament advocate.” Why should you allow the same order of frame-skewing by Hollywood scriptwriters, even accidentally?

Journalists don’t tell me, “The future will be like 2001.” But they ask, “Will the future be like 2001, or will it be like A.I.?” This is just as huge a framing issue as asking, “Should we cut benefits for disabled veterans, or raise taxes on the rich?”

In the ancestral environment, there were no moving pictures; what you saw with your own eyes was true. A momentary glimpse of a single word can prime us and make compatible thoughts more available, with demonstrated strong influence on probability estimates. How much havoc do you think a two-hour movie can wreak on your judgment? It will be hard enough to undo the damage by deliberate concentration—why invite the vampire into your house? In Chess or Go, every wasted move is a loss; in rationality, any non-evidential influence is (on average) entropic.

Do movie-viewers succeed in unbelieving what they see? So far as I can tell, few movie viewers act as if they have directly observed Earth’s future. People who watched the Terminator movies didn’t hide in fallout shelters on August 29, 1997. But those who commit the fallacy seem to act as if they had seen the movie events occurring on some other planet; not Earth, but somewhere similar to Earth.

You say, “Suppose we build a very smart AI,” and they say, “But didn’t that lead to nuclear war in The Terminator?” As far as I can tell, it’s identical reasoning, down to the tone of voice, of someone who might say: “But didn’t that lead to nuclear war on Alpha Centauri?” or “Didn’t that lead to the fall of the Italian city-state of Piccolo in the fourteenth century?” The movie is not believed, but it is cognitively available. It is treated, not as a prophecy, but as an illustrative historical case. Will history repeat itself? Who knows?

In a recent intelligence explosion discussion, someone mentioned that Vinge didn’t seem to think that brain-computer interfaces would increase intelligence much, and cited Marooned in Realtime and Tunç Blumenthal, who was the most advanced traveller but didn’t seem all that powerful. I replied indignantly, “But Tunç lost most of his hardware! He was crippled!” And then I did a mental double-take and thought to myself: What the hell am I saying.

Does the issue not have to be argued in its own right, regardless of how Vinge depicted his characters? Tunç Blumenthal is not “crippled,” he’s unreal. I could say “Vinge chose to depict Tunç as crippled, for reasons that may or may not have had anything to do with his personal best forecast,” and that would give his authorial choice an appropriate weight of evidence. I cannot say “Tunç was crippled.” There is no was of Tunç Blumenthal.

I deliberately left in a mistake I made, in my first draft of the beginning of this essay: “Others defend their use of the example, disagreeing that it’s a fallacy.” But The Matrix is not an example!

A neighboring flaw is the logical fallacy of arguing from imaginary evidence: “Well, if you did go to the end of the rainbow, you would find a pot of gold—which just proves my point!” (Updating on evidence predicted, but not observed, is the mathematical mirror image of hindsight bias.)
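
To spell out why imagined evidence cannot do the work of observed evidence (again a gloss of my own, using nothing beyond the law of total probability): writing $H$ for the hypothesis and $E$ for the predicted evidence,

\[
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E),
\]

so if observing $E$ would raise your credence in $H$, then failing to observe $E$ must lower it, and merely predicting $E$ moves nothing at all. The pot of gold is an update that has not yet been paid for.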

The brain has many mechanisms for generalizing from observation, not just the availability heuristic. You see three zebras, you form the category “zebra,” and this category embodies an automatic perceptual inference. Horse-shaped creatures with white and black stripes are classified as “Zebras,” therefore they are fast and good to eat; they are expected to be similar to other zebras observed.

So people see (moving pictures of) three Borg, their brain automatically creates the category “Borg,” and they infer automatically that humans with brain-computer interfaces are of class “Borg” and will be similar to other Borg observed: cold, uncompassionate, dressing in black leather, walking with heavy mechanical steps. Journalists don’t believe that the future will contain Borg—they don’t believe Star Trek is a prophecy. But when someone talks about brain-computer interfaces, they think, “Will the future contain Borg?” Not, “How do I know computer-assisted telepathy makes people less nice?” Not, “I’ve never seen a Borg and never has anyone else.” Not, “I’m forming a racial stereotype based on literally zero evidence.”

As George Orwell said of cliches:4

What is above all needed is to let the meaning choose the word, and not the other way around . . . When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning.

Yet in my estimation, the most damaging aspect of using other authors’ imaginations is that it stops people from using their own. As Robert Pirsig said:5

She was blocked because she was trying to repeat, in her writing, things she had already heard, just as on the first day he had tried to repeat things he had already decided to say. She couldn’t think of anything to write about Bozeman because she couldn’t recall anything she had heard worth repeating. She was strangely unaware that she could look and see freshly for herself, as she wrote, without primary regard for what had been said before.

Remembered fictions rush in and do your thinking for you; they substitute for seeing—the deadliest convenience of all.

1. Nick Bostrom, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Journal of Evolution and Technology 9 (2002), http://www.jetpress.org/volume9/risks.html.

2. E.g., Hanson’s (2006) “Biases of Science Fiction,” http://www.overcomingbias.com/2006/12/biases_of_scien.html.

3. See “The Third Alternative” in this volume, and “Occam’s Razor” and “Burdensome Details” in Map and Territory.

4. Orwell, “Politics and the English Language.”

5. Pirsig, Zen and the Art of Motorcycle Maintenance.