Being Half-Rational About Pascal’s Wager is Even Worse

For so long as I can remember, I have rejected Pascal’s Wager in all its forms on sheerly practical grounds: anyone who tries to plan out their life by chasing a 1 in 10,000 chance of a huge payoff is almost certainly doomed in practice. This kind of clever reasoning never pays off in real life...

...unless you have also underestimated the allegedly tiny chance of the large impact.

For example. At one critical juncture in history, Leo Szilard, the first physicist to see the possibility of fission chain reactions and hence practical nuclear weapons, was trying to persuade Enrico Fermi to take the issue seriously, in the company of a more prestigious friend, Isidor Rabi:

I said to him: “Did you talk to Fermi?” Rabi said, “Yes, I did.” I said, “What did Fermi say?” Rabi said, “Fermi said ‘Nuts!’” So I said, “Why did he say ‘Nuts!’?” and Rabi said, “Well, I don’t know, but he is in and we can ask him.” So we went over to Fermi’s office, and Rabi said to Fermi, “Look, Fermi, I told you what Szilard thought and you said ‘Nuts!’ and Szilard wants to know why you said ‘Nuts!’” So Fermi said, “Well… there is the remote possibility that neutrons may be emitted in the fission of uranium and then of course perhaps a chain reaction can be made.” Rabi said, “What do you mean by ‘remote possibility’?” and Fermi said, “Well, ten per cent.” Rabi said, “Ten per cent is not a remote possibility if it means that we may die of it. If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it’s ten percent, I get excited about it.” (Quoted in ‘The Making of the Atomic Bomb’ by Richard Rhodes.)

This might look at first like a successful application of “multiplying a low probability by a high impact”, but I would dispute that this was really what was going on. Where the heck did Fermi get that 10% figure for his ‘remote possibility’, especially considering that fission chain reactions did in fact turn out to be possible? If some sort of reasoning had told us that a fission chain reaction was improbable, then after it turned out to be reality, good procedure would have us go back and check our reasoning to see what went wrong, and figure out how to adjust our way of thinking so as to not make the same mistake again. So far as I know, there was no physical reason whatsoever to think a fission chain reaction was only a ten percent probability. Chain reactions had not been demonstrated experimentally, to be sure; but they were still the default projection from what was already known. If you’d been told in the 1930s that fission chain reactions were impossible, you would’ve been told something that implied new physical facts unknown to current science (and indeed, no such facts existed). After reading enough historical instances of famous scientists dismissing things as impossible when there was no physical logic to say they were even improbable, one cynically suspects that some prestigious scientists perhaps came to conceive of themselves as senior people who ought to be skeptical about things, and that Fermi was just reacting emotionally. The lesson I draw from this historical case is not that it’s a good idea to go around multiplying ten percent probabilities by large impacts, but that Fermi should not have pulled out a number as low as ten percent.

Having seen enough conversations involving made-up probabilities to become cynical, I also strongly suspect that if Fermi had foreseen how Rabi would reply, Fermi would’ve said “One percent”. If Fermi had expected Rabi to say “One percent is not small if...” then Fermi would’ve said “One in ten thousand” or “Too small to consider”—whatever he thought would get him off the hook. Perhaps I am being too unkind to Fermi, who was a famously great estimator; Fermi may well have performed some sort of lawful probability estimate on the spot. But Fermi is also the one who said that nuclear energy was fifty years off in the unlikely event it could be done at all, two years (IIRC) before Fermi himself oversaw the construction of the first nuclear pile. Where did Fermi get that fifty-year number from? This sort of thing does make me more likely to believe that Fermi, in playing the role of the solemn doubter, was just Making Things Up; and this is no less a sin when you make up skeptical things. And if this cynicism is right, then we cannot learn the lesson that it is wise to multiply small probabilities by large impacts because this is what saved Fermi—if Fermi had known the rule, if he had seen it coming, he would have just Made Up an even smaller probability to get himself off the hook. It would have been so very easy and convenient to say, “One in ten thousand, there’s no experimental proof and most ideas like that are wrong! Think of all the conjunctive probabilities that have to be true before we actually get nuclear weapons and our own efforts actually make a difference in that!” followed shortly by “But it’s not practical to be worried about such tiny probabilities!” Or maybe Fermi would’ve known better, but even so I have never been a fan of trying to have two mistakes cancel each other out.

I mention all this because it is dangerous to be half a rationalist, and only stop making one of the two mistakes. If you are going to reject impractical ‘clever arguments’ that would never work in real life, and henceforth not try to multiply tiny probabilities by huge payoffs, then you had also better reject all the clever arguments that would’ve led Fermi or Szilard to assign probabilities much smaller than ten percent. (Listing out a group of conjunctive probabilities leading up to taking an important action, and not listing any disjunctive probabilities, is one widely popular way of driving down the apparent probability of just about anything.) Or if you would’ve tried to put fission chain reactions into a reference class of ‘amazing new energy sources’ and then assigned it a tiny probability, or put Szilard into the reference class of ‘people who think the fate of the world depends on them’, or pontificated about the lack of any positive experimental evidence proving that a chain reaction was possible, blah blah blah etcetera—then your error here can perhaps be compensated for by the opposite error of then trying to multiply the resulting tiny probability by a large impact. I don’t like making clever mistakes that cancel each other out—I consider that idea to also be clever—but making clever mistakes that don’t cancel out is worse.
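The parenthetical point about conjunctive versus disjunctive listing is easy to check with arithmetic. A toy sketch, with every number invented purely for illustration:

```python
# Toy numbers, invented for illustration: five "necessary" conjunctive
# steps, each guessed at 50%, make any single plan look negligible...
conjunctive_steps = [0.5] * 5
p_single_path = 1.0
for p in conjunctive_steps:
    p_single_path *= p          # 0.5**5 = 0.03125, about 3%

# ...but if there are several independent disjunctive routes to the
# same outcome, the chance that at least one succeeds is much larger.
routes = [p_single_path] * 8    # eight equally-guessed alternative routes
p_all_fail = 1.0
for p in routes:
    p_all_fail *= 1 - p
p_any_route = 1 - p_all_fail    # roughly 22%
```

Listing only the conjunctive column yields about 3%; counting the disjunctive routes as well brings the same outcome back above 20%. Which column a speaker chooses to enumerate largely determines the number that comes out.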

On the other hand, if you want a general heuristic that could’ve led Fermi to do better, I would suggest reasoning that previous-historical experimental proof of a chain reaction would not be strongly expected even in worlds where it was possible, and that to discover a chain reaction to be impossible would imply learning some new fact of physical science which was not already known. And this is not just 20-20 hindsight; Szilard and Rabi saw the logic in advance of the fact, not just afterward—though not in those exact terms; they just saw the physical logic, and then didn’t adjust it downward for ‘absurdity’ or with more complicated rationalizations. But then if you are going to take this sort of reasoning at face value, without adjusting it downward, then it’s probably not a good idea to panic every time you assign a 0.01% probability to something big—you’ll probably run into dozens of things like that, at least, and panicking over them would leave no room to wait until you found something whose face-value probability was large.

I don’t believe in multiplying tiny probabilities by huge impacts. But I also believe that Fermi could have done better than saying ten percent, and that it wasn’t just random luck mixed with overconfidence that led Szilard and Rabi to assign higher probabilities than that. Or to name a modern issue which is still open, Michael Shermer should not have dismissed the possibility of molecular nanotechnology, and Eric Drexler will not have been randomly lucky when it turns out to work: taking current physical models at face value implies that molecular nanotechnology ought to work, and if it doesn’t work we’ve learned some new fact unknown to present physics, etcetera. Taking the physical logic at face value is fine, and there’s no need to adjust it downward for any particular reason; if you say that Eric Drexler should ‘adjust’ this probability downward for whatever reason, then I think you’re giving him rules that predictably give him the wrong answer. Sometimes surface appearances are misleading, but most of the time they’re not.

A key test I apply to any supposed rule of reasoning about high-impact scenarios is, “Does this rule screw over the planet if Reality actually hands us a high-impact scenario?” and if the answer is yes, I discard it and move on. The point of rationality is to figure out which world we actually live in and adapt accordingly, not to rule out certain sorts of worlds in advance.

There’s a doubly-clever form of the argument wherein everyone in a plausibly high-impact position modestly attributes only a tiny potential possibility that their face-value view of the world is sane, and then they multiply this tiny probability by the large impact, and so they act anyway and on average worlds in trouble are saved. I don’t think this works in real life—I don’t think I would have wanted Leo Szilard to think like that. I think that if your brain really actually thinks that fission chain reactions have only a tiny probability of being important, you will go off and try to invent better refrigerators or something else that might make you money. And if your brain does not really feel that fission chain reactions have a tiny probability, then your beliefs and aliefs are out of sync and that is not something I want to see in people trying to handle the delicate issue of nuclear weapons. But in any case, I deny the original premise: I do not think the world’s niches for heroism must be populated by heroes who are incapable in principle of reasonably distinguishing themselves from a population of crackpots, all of whom have no choice but to continue on the tiny off-chance that they are not crackpots.

I haven’t written enough about what I’ve begun thinking of as ‘heroic epistemology’—why, how can you possibly be so overconfident as to dare even try to have a huge positive impact when most people in that reference class blah blah blah—but on reflection, it seems to me that an awful lot of my answer boils down to not trying to be clever about it. I don’t multiply tiny probabilities by huge impacts. I also don’t get tiny probabilities by putting myself into inescapable reference classes, for this is the sort of reasoning that would screw over planets that actually were in trouble if everyone thought like that. In the course of any workday, on the now very rare occasions I find myself thinking about such meta-level junk instead of the math at hand, I remind myself that it is a wasted motion—where a ‘wasted motion’ is any thought which will, in retrospect if the problem is in fact solved, not have contributed to having solved the problem. If someday Friendly AI is built, will it have been terribly important that someone have spent a month fretting about what reference class they’re in? No. Will it, in retrospect, have been an important step along the pathway to understanding stable self-modification, if we spend time trying to solve the Löbian obstacle? Possibly. So one of these cognitive avenues is predictably a wasted motion in retrospect, and one of them is not. The same would hold if I spent a lot of time trying to convince myself that I was allowed to believe that I could affect anything large, or any other form of angsting about meta. It is predictable that in retrospect I will think this was a waste of time compared to working on a trust criterion between a probability distribution and an improved probability distribution. (Apologies, this is a technical thingy I’m currently working on which has no good English description.)

But if you must apply clever adjustments to things, then for Belldandy’s sake don’t be one-sidedly clever and have all your cleverness be on the side of arguments for inaction. I think you’re better off without all the complicated fretting—but you’re definitely not better off eliminating only half of it.

And finally, I once again state that I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead. You cannot justifiably trade off tiny probabilities of x-risk improvement against efforts that do not effectuate a happy intergalactic civilization, but there is nonetheless no need to go on tracking tiny probabilities when you’d expect there to be medium-sized probabilities of x-risk reduction. Nonetheless I try to avoid coming up with clever reasons to do stupid things, and one example of a stupid thing would be not working on Friendly AI when it’s in blatant need of work. Elaborate complicated reasoning which says we should let the Friendly AI issue just stay on fire and burn merrily away, well, any complicated reasoning which returns an output this silly is automatically suspect.

If, however, you are unlucky enough to have been cleverly argued into obeying rules that make it a priori unreachable-in-practice for anyone to end up in an epistemic state where they try to do something about a planet which appears to be on fire—so that there are no more plausible x-risk reduction efforts to fall back on, because you’re adjusting all the high-impact probabilities downward from what the surface state of the world suggests...

Well, that would only be a good idea if Reality were not allowed to hand you a planet that was in fact on fire. Or if, given a planet on fire, Reality was prohibited from handing you a chance to put it out. There is no reason to think that Reality must a priori obey such a constraint.

EDIT: To clarify, “Don’t multiply tiny probabilities by large impacts” is something that I apply to large-scale projects and lines of historical probability. On a very large scale, if you think FAI stands a serious chance of saving the world, then humanity should dump a bunch of effort into it, and if nobody’s dumping effort into it then you should dump more effort than currently into it. On a smaller scale, to compare two x-risk mitigation projects in demand of money, you need to estimate something about marginal impacts of the next added effort (where the common currency of utilons should probably not be lives saved, but “probability of an ok outcome”, i.e., the probability of ending up with a happy intergalactic civilization). In this case the average marginal added dollar can only account for a very tiny slice of probability, but this is not Pascal’s Wager. Large efforts with a success-or-failure criterion are rightly, justly, and unavoidably going to end up with small marginally increased probabilities of success per added small unit of effort. It would only be Pascal’s Wager if the whole route-to-an-OK-outcome were assigned a tiny probability, and then a large payoff used to shut down further discussion of whether the next unit of effort should go there or to a different x-risk.
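The marginal comparison described above can be made concrete. A minimal sketch with invented figures (the projects, costs, and probability increments are all hypothetical, chosen only to show the shape of the calculation):

```python
def marginal_ok_prob_per_dollar(delta_p, dollars):
    """Probability-of-an-OK-outcome bought per added dollar (toy model)."""
    return delta_p / dollars

# Invented figures: project A buys a 1e-6 probability increment for
# $10,000; project B buys a 5e-6 increment for $100,000.
rate_a = marginal_ok_prob_per_dollar(1e-6, 10_000)    # 1e-10 per dollar
rate_b = marginal_ok_prob_per_dollar(5e-6, 100_000)   # 5e-11 per dollar
better = "A" if rate_a > rate_b else "B"
```

The per-dollar slices are tiny in both cases, exactly as the text says they must be for any large success-or-failure effort; the decision rests on comparing the marginal rates, not on multiplying a tiny total probability by a huge payoff to shut down the comparison.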