The Pascal’s Wager Fallacy Fallacy

Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models. The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.
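A minimal sketch of the standard compactness argument behind that fact:

```latex
% For each n, let \varphi_n assert that at least n distinct elements exist:
\varphi_n \;=\; \exists x_1 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j
% If a first-order theory T has arbitrarily large finite models, then every finite
% subset of T \cup \{\varphi_1, \varphi_2, \ldots\} has a model, so by compactness
% the whole set has a model, which must be infinite. Hence no first-order theory
% with arbitrarily large finite models can exclude infinite ones.
```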

So I observed that:

  1. Although the laws of physics as we know them don’t allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it’s possible that our model of physics is mistaken. (I go into some detail on this possibility below the cutoff.)

  2. If it is possible for an agent (or, say, the human species) to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

And the one said, “Isn’t that a form of Pascal’s Wager?”

I’m going to call this the Pascal’s Wager Fallacy Fallacy.

You see it all the time in discussions of cryonics. The one says, “If cryonics works, then the payoff could be, say, at least a thousand additional years of life.” And the other one says, “Isn’t that a form of Pascal’s Wager?”

The original problem with Pascal’s Wager is not that the purported payoff is large; that is not where the flaw in the reasoning lies. The problem with Pascal’s original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that comparably tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).
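In expected-utility terms, a rough sketch (treating the prior of each deity hypothesis as on the order of $2^{-K}$ in its description complexity $K$, with $V$ the promised payoff):

```latex
% Rough sketch: the two wagers have comparably tiny priors and opposite payoffs,
% so neither huge-payoff term dominates the decision.
\mathbb{E}[U \mid \text{wager on the Christian God}]
  \;\approx\; 2^{-K(\text{Christian God})} \, V
  \;-\; 2^{-K(\text{Muslim God})} \, V
  \;+\; (\text{ordinary, non-tiny considerations})
```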

However, what we have here is the term “Pascal’s Wager” being applied solely because the payoff being considered is large: the reasoning is perceptually recognized as an instance of “the Pascal’s Wager fallacy” as soon as someone mentions a big payoff, without any attention given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.

And then, once the reasoning is perceptually recognized as an instance of “the Pascal’s Wager fallacy”, the other characteristics of the fallacy are automatically inferred: the critic assumes that the probability is tiny and that the scenario has no specific support apart from the payoff.

But infinite physics and cryonics are both possibilities that, leaving their payoffs entirely aside, get significant chunks of probability mass purely on merit.

Yet instead we have reasoning that runs like this:

  1. Cryonics has a large payoff;

  2. Therefore, the argument carries even if the probability is tiny;

  3. Therefore, the probability is tiny;

  4. Therefore, why bother thinking about it?

(Posted here instead of Less Wrong, at least for now, because of the Hanson/Cowen debate on cryonics.)

Further details:

Pascal’s Wager is actually a serious problem for those of us who want to use Kolmogorov complexity as an Occam prior, because the size of even the finite computations blows up much faster than their probability diminishes (see here).
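A rough sketch of why, assuming a prior that gives a hypothesis describable in $n$ bits probability on the order of $2^{-n}$:

```latex
% n bits of description can name payoffs that grow far faster than 2^{-n} shrinks,
% e.g. an Ackermann-sized value A(n), so the expected-utility sum diverges:
\sum_{n} 2^{-n} \, A(n) \;=\; \infty
% The size of the finite computations a short hypothesis can describe blows up
% much faster than its prior probability diminishes.
```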

See Bostrom on infinite ethics for how much worse things get if you allow non-halting Turing machines.

In our current model of physics, time is infinite, and so the collection of real things is infinite. Each time state has a successor state, and there’s no particular assertion that time returns to the starting point. Considering time’s continuity just makes it worse: now we have an uncountable set of real things!

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient, to say the least.

On the other hand, many computationally simple laws of physics, like the laws of Conway’s Life, permit indefinitely running Turing machines to be encoded. So we can’t say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics (say, one more discovery of the size of quantum mechanics or Special Relativity) might be enough to knock (our model of) physics out of the region that corresponds to “You can only run boundedly large Turing machines”.
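For concreteness, a minimal sketch (in Python, purely illustrative) of how simple Life’s laws are; the glider at the end is the kind of open-ended pattern that underlies Life’s ability to encode indefinitely running Turing machines:

```python
from collections import Counter

def step(live_cells):
    """One tick of Conway's Life: birth on exactly 3 live neighbors,
    survival on 2 or 3. Cells are (x, y) pairs on a conceptually unbounded grid."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider: a five-cell pattern that keeps translating itself across the
# unbounded grid forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider shape, shifted by (1, 1)
```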

So while we have no particular reason to expect physics to allow unbounded computation, it’s not a small, special, unjustifiably singled-out possibility like the Christian God; it’s a large region of what various possible physical laws will allow.

And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes, you are the same person). There are negative possibilities (being woken up in a dystopia and not allowed to die), but they are exotic, not having equal probability weight to counterbalance the positive possibilities.