# Confidence levels inside and outside an argument

Related to: Infinite Certainty

Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?

Mine would be significantly less than 999,999,999 in a billion.

When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in “But that still leaves a one in a billion chance, right?”. The majority of the probability is in “That argument is flawed”. Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.

More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.

So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model’s internal level of confidence is 999,999,999/billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
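In probability terms, external confidence is a mixture: weight the model’s number by the chance the model is sound, and fall back to the prior otherwise. A minimal sketch of this idea (the 99.9% trust figure is invented for illustration, and treating a flawed model as carrying zero information is a simplifying assumption):

```python
def external_probability(p_internal, p_model_sound, prior=0.5):
    """Blend a model's internal probability with the prior,
    weighted by trust in the model. A flawed model is assumed
    to carry no information, leaving us at the prior."""
    return p_model_sound * p_internal + (1 - p_model_sound) * prior

# Even 99.9% trust in the model caps external confidence far
# below its internal 999,999,999-in-a-billion figure.
p = external_probability(0.999999999, p_model_sound=0.999)
print(p)  # ~0.9995, nowhere near 1 - 1e-9
```

Note that no matter how extreme the model’s internal number gets, the external probability can never exceed the trust placed in the model itself.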

## Is That Really True?

One might be tempted to respond “But there’s an equal chance that the false model is too high, versus that it is too low.” Maybe there was a bug in the computer program, but it prevented it from giving the incumbent’s real chances of 999,999,999,999 out of a trillion.

The prior probability of a candidate winning an election is 50%.1 We need information to push us away from this probability in either direction. To push significantly away from this probability, we need strong information. Any weakness in the information weakens its ability to push away from the prior. If there’s a flaw in FiveThirtyEight’s model, that takes us away from their probability of 999,999,999 in a billion, and back closer to the prior probability of 50%.

We can confirm this with a quick sanity check. Suppose we know nothing about the election (i.e. we still think it’s 50-50) until an insane person reports a hallucination that an angel has declared the incumbent to have a 999,999,999/billion chance. We would not be tempted to accept this figure on the grounds that it is equally likely to be too high as too low.

A second objection covers situations such as a lottery. I would like to say the chance that Bob wins a lottery with one billion players is 1 in a billion. Do I have to adjust this upward to cover the possibility that my model for how lotteries work is somehow flawed? No. Even if I am misunderstanding the lottery, I have not departed from my prior. Here, new information really does have an equal chance of going against Bob as of going in his favor. For example, the lottery may be fixed (meaning my original model of how to determine lottery winners is fatally flawed), but there is no greater reason to believe it is fixed in favor of Bob than anyone else.2
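The symmetry can be made explicit: even granting some probability that the lottery model is wrong, as long as the failure mode favors no particular player, Bob’s chance stays at 1/N. A toy calculation (the one-in-a-million rigging probability is invented):

```python
import math

N = 10**9        # players
p_rigged = 1e-6  # invented chance that our model of the lottery is wrong

# If rigged, it is rigged for some player chosen with no bias
# toward Bob, so his chance in that branch is still 1/N.
p_bob = (1 - p_rigged) * (1 / N) + p_rigged * (1 / N)
assert math.isclose(p_bob, 1 / N)  # model doubt moves nothing
```

Contrast this with the election model, where the flawed branch pulls toward 50% rather than toward the model’s own answer.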

## Spotted in the Wild

The recent Pascal’s Mugging thread spawned a discussion of the Large Hadron Collider destroying the universe, which also got continued on an older LHC thread from a few years ago. Everyone involved agreed the chances of the LHC destroying the world were less than one in a million, but several people gave extraordinarily low chances based on cosmic ray collisions. The argument was that since cosmic rays have been performing particle collisions similar to the LHC’s zillions of times per year, the chance that the LHC will destroy the world is either literally zero, or else a number related to the probability that there’s some chance of a cosmic ray destroying the world so minuscule that it hasn’t gotten actualized in zillions of cosmic ray collisions. Of the commenters mentioning this argument, one gave a probability of 1/(3*10^22), another suggested 1/10^25, both of which may be good numbers for the internal confidence of this argument.

But the connection between this argument and the general LHC argument flows through statements like “collisions produced by cosmic rays will be exactly like those produced by the LHC”, “our understanding of the properties of cosmic rays is largely correct”, and “I’m not high on drugs right now, staring at a package of M&Ms and mistaking it for a really intelligent argument that bears on the LHC question”, all of which are probably more likely than 1/10^20. So instead of saying “the probability of an LHC apocalypse is now 1/10^20”, say “I have an argument whose internal probability of an LHC apocalypse is 1/10^20, which lowers my probability a bit depending on how much I trust that argument”.

In fact, the argument has a potential flaw: according to Giddings and Mangano, the physicists officially tasked with investigating LHC risks, black holes from cosmic rays might have enough momentum to fly through Earth without harming it, and black holes from the LHC might not.3 This was predictable: this was a simple argument in a complex area trying to prove a negative, and it would have been presumptuous to believe with greater than 99% probability that it was flawless. If you can only give 99% probability to the argument being sound, then it can only reduce your probability in the conclusion by a factor of a hundred, not a factor of 10^20.

But it’s hard for me to be properly outraged about this, since the LHC did not destroy the world. A better example might be the following, taken from an online discussion of creationism4 and apparently based on something by Fred Hoyle:
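The factor-of-a-hundred cap follows directly from mixing the two branches; a sketch with an invented one-in-a-million prior for illustration:

```python
prior = 1e-6        # invented prior for the disaster scenario
p_sound = 0.99      # at most 99% confidence the argument is flawless
p_if_sound = 1e-20  # the argument's internal probability

posterior = p_sound * p_if_sound + (1 - p_sound) * prior
# The flawed branch dominates: posterior is roughly prior / 100,
# a factor-of-100 reduction rather than a factor of 10^20.
print(posterior)
```

The 1e-20 term is so small that it contributes essentially nothing; all the action is in the 1% chance that the argument fails.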

In order for a single cell to live, all of the parts of the cell must be assembled before life starts. This involves 60,000 proteins that are assembled in roughly 100 different combinations. The probability that these complex groupings of proteins could have happened just by chance is extremely small. It is about 1 chance in 10 to the 4,478,296 power. The probability of a living cell being assembled just by chance is so small, that you may as well consider it to be impossible. This means that the probability that the living cell is created by an intelligent creator, that designed it, is extremely large. The probability that God created the living cell is 10 to the 4,478,296 power to 1.

Note that someone just gave a confidence level of 10^4478296 to one and was wrong. This is the sort of thing that should never ever happen. This is possibly the most wrong anyone has ever been.

It is hard to say in words exactly how wrong this is. Saying “This person would be willing to bet the entire world GDP for a thousand years if evolution were true against a one in one million chance of receiving a single penny if creationism were true” doesn’t even begin to cover it: a mere 1/10^25 would suffice there. Saying “This person believes he could make one statement about an issue as difficult as the origin of cellular life per Planck interval, every Planck interval from the Big Bang to the present day, and not be wrong even once” only brings us to 1/10^61 or so. If the chance of getting Ganser’s Syndrome, the extraordinarily rare psychiatric condition that manifests in a compulsion to say false statements, is one in a hundred million, and the world’s top hundred thousand biologists all agree that evolution is true, then this person should preferentially believe it is more likely that all hundred thousand have simultaneously come down with Ganser’s Syndrome than that they are doing good biology.5

This creationist’s flaw wasn’t mathematical; the math probably does return that number. The flaw was confusing the internal probability (that complex life would form completely at random in a way that can be represented with this particular algorithm) with the external probability (that life could form without God). He should have added a term representing the chance that his knockdown argument just didn’t apply.

Finally, consider the question of whether you can assign 100% certainty to a mathematical theorem for which a proof exists. Eliezer has already examined this issue and come out against it (citing as an example this story of Peter de Blanc’s). In fact, this is just the specific case of differentiating internal versus external probability when internal probability is equal to 100%. Now your probability that the theorem is false is entirely based on the probability that you’ve made some mistake.

The many mathematical proofs that were later overturned provide practical justification for this mindset.

This is not a fully general argument against giving very high levels of confidence: very complex situations and situations with many exclusive possible outcomes (like the lottery example) may still make it to the 1/10^20 level, albeit probably not the 1/10^4478296. But in other sorts of cases, giving a very high level of confidence requires a check that you’re not confusing the probability inside one argument with the probability of the question as a whole.
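The orders of magnitude above can be checked with a couple of lines of arithmetic (the constants are rounded standard values):

```python
import math

# Planck intervals since the Big Bang: age of universe / Planck time
age_s = 4.3e17    # ~13.8 billion years, in seconds
planck_t = 5.4e-44

print(math.log10(age_s / planck_t))  # ~61, matching "1/10^61 or so"

# All 100,000 top biologists independently struck by a
# one-in-a-hundred-million condition:
log10_all = 100_000 * math.log10(1e-8)  # = -800,000
# Still enormously more likely than 1 in 10^4,478,296.
assert log10_all > -4_478_296
```

Even that deliberately absurd coincidence falls short of the creationist’s claimed confidence by more than three and a half million orders of magnitude.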

## Footnotes

1. Although technically we know we’re talking about an incumbent, who typically has a much higher chance, around 90% in Congress.

2. A particularly devious objection might be: “What if the lottery commissioner, in a fit of political correctness, decides that ‘everyone is a winner’ and splits the jackpot a billion ways?” If this would satisfy your criteria for “winning the lottery”, then this mere possibility should indeed move your probability upward. In fact, since there is probably greater than a one in one billion chance of this happening, the majority of your probability for Bob winning the lottery should concentrate here!

3. Giddings and Mangano then go on to re-prove the original “won’t cause an apocalypse” argument using a more complicated method involving white dwarf stars.

4. While searching creationist websites for the half-remembered argument I was looking for, I found what may be my new favorite quote: “Mathematicians generally agree that, statistically, any odds beyond 1 in 10 to the 50th have a zero probability of ever happening.”

5. I’m a little worried that five years from now I’ll see this quoted on some creationist website as an actual argument.

• While searching creationist websites for the half-remembered argument I was looking for, I found what may be my new favorite quote: “Mathematicians generally agree that, statistically, any odds beyond 1 in 10 to the 50th have a zero probability of ever happening.”

That reminds me of one of my favourites, from a pro-abstinence blog:

When you play with fire, there is a 50/50 chance something will go wrong, and nine times out of ten it does.

• In Terry Pratch­ett’s Disc­world se­ries, it is a law of nar­ra­tive causal­ity that 1 in a mil­lion chances work out 9 times out of 10. Some char­ac­ters once made a difficult thing they were at­tempt­ing ar­tifi­cially harder, to try to make the prob­a­bil­ity ex­actly 1 in a mil­lion and in­voke this trope.

• That’s pretty awe­some. (He’s already on my list of au­thors to read if I ever ac­quire an at­ten­tion span suffi­cient for nov­els.)

• It’s worth point­ing out that two of his books (Hog­father and Color of Magic) have been made in to movies. I’m not sure how hard they are to find, but I know NetFlix has at least one of them. I’ve only seen Hog­father, but I thought it was a pretty good adap­ta­tion of the book :)

• Pratch­ett is near the top of my to-read list, but I don’t know which book(s) to start with. Color of Magic was the first in the se­ries, but it doesn’t seem like the kind of se­ries that needs to be read in or­der. Mort, Hog­father, Wee Free Men, and Witches Abroad have all been men­tioned fa­vor­ably on LW, so maybe one of those? Recom­men­da­tions?

• I started with Color of Magic, but didn’t re­ally get into it much. It was fine writ­ing, but noth­ing very spe­cial. Then I read some later works and re­al­ised that he got much bet­ter. As there’s no rea­son to read them in or­der (as you say), this means that you prob­a­bly shouldn’t!

(My favourite is Night Watch, but I’ve still only read a few, so you should prob­a­bly ig­nore that.)

• This ques­tion comes up a lot! A fan has come up with a very sen­si­ble and helpful chart, in many lan­guages no less! http://​​www.ls­pace.org/​​books/​​read­ing-or­der-guides/​​

• There are more con­nec­tions be­tween the books than are laid out in that chart though. The Last Hero, for in­stance, fea­tures mem­bers of the Night Watch cast about as strongly as the Wizards cast, and other books have minor con­nec­tions to each other that are sim­ply in­con­ve­nient to draw out be­cause they’re far away from each other on the chart.

Rincewind’s sto­ries are pretty much all in the vein of fan­tasy novel satire, while later books tended more to­wards so­cial com­men­tary in a hu­morous fan­tasy set­ting, so they do end up be­ing a bit dis­con­nected from the books that come later in the se­ries.

• Thanks! (dis­tributed also to the other replies)

• This con­firms my vague feel­ing that Rincewind’s stuff is not par­tic­uarly well con­nected to the rest of Disc­world.

• I started with Color of Magic, but didn’t re­ally get into it much. It was fine writ­ing, but noth­ing very spe­cial. Then I read some later works and re­al­ised that he got much bet­ter.

I went to a talk by Pratch­ett and he pretty much ad­mit­ted the same thing. He sug­gested start­ing with book 6 or so. :)

• I’ve read all of them ex­cept the Tif­fany Ach­ing ones, and Night Watch is still my fa­vorite.

I think it’s bet­ter if you’re already well fa­mil­iar with the Night Watch books and the set­ting of Ankh Mor­pork be­fore you read it though.

• Read the Tif­fany Ach­ing ones. They’re not just for chil­dren, but es­pe­cially read them if you have or ever ex­pect to have chil­dren. Th­ese are the sto­ries on which baby ra­tio­nal­ists ought to be raised.

• I have read the first three since I left that com­ment (so all but I Shall Wear Mid­night,) and I thought they were, at least pretty good, as all the Disc­world books were, but as far as younger-read­ers’ Disc­world books go, I rate The Amaz­ing Mau­rice and His Ed­u­cated Ro­dents more highly.

• Same here. I never finished CoM, but be­came hooked af­ter pick­ing up Equal Rites.

• I started by read­ing a few from around the mid­dle in no par­tic­u­lar or­der (start­ing with Soul Mu­sic), then bought the whole se­ries and read them from the start. Read­ing them in the di­s­or­der is not much of a prob­lem, even books that are part of the same se­ries with the same char­ac­ters have sto­ries that stand up wholly on their own.

The se­ries:

The Rincewind se­ries: the first Disc­world books are in it, but it’s not the best; I’d recom­mend the oth­ers first. It’s prob­a­bly best to read the books in this se­ries in or­der.

The Witches se­ries: starts with Equal Rites, but start­ing with Wyrd Sisters is fine (Equal Rites is one of the early books, and not very heav­ily linked to the rest). I’d recom­mend read­ing Wyrd Sisters ⇒ Witches Abroad ⇒ Lords and Ladies etc. in or­der. Prob­a­bly my fa­vorite se­ries.

The city watch se­ries: starts with Guards! Guards!, I’d recom­mend read­ing them in or­der. A pretty good se­ries.

The Death se­ries: has sev­eral books, but they aren’t heav­ily linked to one an­other, ex­cept maybe to­wards the end (I’d recom­mend read­ing Soul Mu­sic be­fore Hog­father).

Stan­dalone books: Small Gods, Mov­ing Pic­tures, Pyra­mids … not part of any se­ries, but quite good.

• Moist von Lipvig—Go­ing Postal, Mak­ing Money. Don’t miss them.

Thief of Time (stan­dalone but loosely re­lated to the Death books) is a favourite of mine too.

• Do you ever go to movies?

• Once in a while.

• In my ex­pe­rience read­ing a (good) novel re­quires lit­tle, if any, more at­ten­tion than watch­ing a movie. I do read un­usu­ally quickly, but I hon­estly find it al­most eas­ier to be wrapped up in a good book than to be in­vested in a movie, es­pe­cially if it’s a book as good as one of Pratch­ett’s. You should definitely give him a try.

• One thing I find is that books re­quire a bit of effort to get into, whereas movies force them­selves upon you.

• I find al­most the re­verse. Movies seem to be sig­nifi­cantly more likely to have weird er­rors or other el­e­ments that break my sus­pen­sion of dis­be­lief, whereas in books the fact that I’m imag­in­ing most of the events al­lows me to kind of filter any­thing that seems too im­plau­si­ble into a more log­i­cal nar­ra­tive.

• In­ter­est­ing. I find it’s much eas­ier to sus­pend dis­be­lief and make ex­cuses for movies, since I know that they only have two hours to work for—it’s much eas­ier to con­vince my­self that the ex­pla­na­tion is cor­rect, and they just didn’t have time to go in to it on screen :)

• Try and do that with Rudy Rucker, I dare you. I only en­dured first thirty or so pages of his “Posts­in­gu­lar” be­fore all that was left of my sus­pen­sion of dis­be­lief were sad ashes and smoke started to come out of my ears.

EDIT: Although, to be fair, I haven’t tried his other books. I hear the ‘ware’ tril­ogy is quite good. I can’t shake off the dis­taste af­ter try­ing “Posts­in­gu­lar”, though.

• I would say this is true for en­gag­ing nov­els. This is not pre­cisely the same set as good nov­els, though there is cer­tainly much over­lap. Disc­world, I think, is even more rep­re­sen­ta­tive of the former set than the lat­ter, though, so it cer­tainly should ap­ply here—though no doubt the stick­i­ness varies from per­son to per­son.

• When you play with fire, there is a 50/50 chance something will go wrong, and nine times out of ten it does.

They are only ad­mit­ting their poor cal­ibra­tion.

• Heh.

Though, admitting poor calibration that way is like saying “I incorrectly believe X to be true; it’s actually Y”.

• Note that some­one just gave a con­fi­dence level of 10^4478296 to one and was wrong. This is the sort of thing that should never ever hap­pen. This is pos­si­bly the most wrong any­one has ever been.

I was in some dis­cus­sion at SIAI once and made an es­ti­mate that ended up be­ing off by some­thing like three hun­dred trillion or­ders of mag­ni­tude. (Some­thing about gi­ant look-up ta­bles, but still.) Any­one outdo me?

• Wow. The worst I’ve ever done is giv­ing 9 or­ders of mag­ni­tude in­side my 90% con­fi­dence in­ter­val for the ve­loc­ity of the earth and be­ing wrong. (It turns out the earth doesn’t move faster than the speed of light!)

• Surely declar­ing “x is im­pos­si­ble”, be­fore wit­ness­ing x, would be the most wrong you could be?

• I take more is­sue with the peo­ple who in­cre­d­u­lously shout “That’s im­pos­si­ble!” af­ter wit­ness­ing x.

• I don’t. You can wit­ness a ma­gi­cian, e.g., vi­o­lat­ing con­ser­va­tion of mat­ter, and still de­clare “that’s im­pos­si­ble!”

Ba­si­cally, you’re stat­ing that you don’t be­lieve that the sig­nals your senses re­ported to you are ac­cu­rate.

• The col­lo­quial mean­ing of “x is im­pos­si­ble” is prob­a­bly closer to “x has prob­a­bil­ity <0.1%” than “x has prob­a­bil­ity 0”

• This is good, but I feel like we’d bet­ter rep­re­sent hu­man psy­chol­ogy if we said:

Most peo­ple don’t make a dis­tinc­tion be­tween the con­cepts of “x has prob­a­bil­ity <0.1%” and “x is im­pos­si­ble”.

I say this be­cause I think there’s an im­por­tant differ­ence be­tween the times when peo­ple have a pre­cise mean­ing in mind, which they’ve ex­pressed poorly, and the times when peo­ple’s ac­tual con­cepts are vague and fuzzy. (Often, peo­ple don’t re­al­ise how fuzzy their con­cepts are).

• Probability zero and impossibility are not exactly the same thing. A possible event can have probability 0, but an impossible event must have probability 0.

• You are refer­ring to the math­e­mat­i­cal defi­ni­tion of im­pos­si­bil­ity, and I am well aware of the fact that it is differ­ent from prob­a­bil­ity zero (flip­ping a coin for­ever with­out get­ting tails has prob­a­bil­ity zero but is not math­e­mat­i­cally im­pos­si­ble). My point is that nei­ther of those is ac­tu­ally what most peo­ple (as op­posed to math­e­mat­i­ci­ans and philoso­phers) mean by im­pos­si­ble.

• Prob­a­bil­ities of 1 and 0 are con­sid­ered rule vi­o­la­tions and dis­carded.

• Prob­a­bil­ities of 1 and 0 are con­sid­ered rule vi­o­la­tions and dis­carded.

What should we take for P(X|X) then?

And then what can I put you down for the prob­a­bil­ity that Bayes’ The­o­rem is ac­tu­ally false? (I mean the the­o­rem it­self, not any par­tic­u­lar de­ploy­ment of it in an ar­gu­ment.)

• What should we take for P(X|X) then?

The one that I con­fess is giv­ing me the most trou­ble is P(A|A). But I would pre­fer to call that a syn­tac­tic elimi­na­tion rule for prob­a­bil­is­tic rea­son­ing, or per­haps a set equal­ity be­tween events, rather than claiming that there’s some spe­cific propo­si­tion that has “Prob­a­bil­ity 1”.

and then

Huh, I must be slowed down be­cause it’s late at night… P(A|A) is the sim­plest case of all. P(x|y) is defined as P(x,y)/​P(y). P(A|A) is defined as P(A,A)/​P(A) = P(A)/​P(A) = 1. The ra­tio of these two prob­a­bil­ities may be 1, but I deny that there’s any ac­tual prob­a­bil­ity that’s equal to 1. P(|) is a mere no­ta­tional con­ve­nience, noth­ing more. Just be­cause we con­ven­tion­ally write this ra­tio us­ing a “P” sym­bol doesn’t make it a prob­a­bil­ity.

• Ah, thanks for the poin­ter. Some­one’s tried to an­swer the ques­tion about the re­li­a­bil­ity of Bayes’ The­o­rem it­self too I see. But I’m afraid I’m go­ing to have to pass on this, be­cause I don’t see how call­ing some­thing a syn­tac­tic elimi­na­tion rule in­stead a law of logic saves you from in­co­her­ence.

• I’d be in­ter­ested to hear your thoughts on why you be­lieve EY is in­co­her­ent? I thought that what EY said makes sense. Is the prob­a­bil­ity of a tau­tol­ogy be­ing true 1? You might think that it is true by defi­ni­tion, but what if the con­cept is not even wrong, can you ab­solutely rule out that pos­si­bil­ity? Your sense of truth by defi­ni­tion might be mis­taken in the same way as the ex­pe­rience of a Déjà vu. The ex­pe­rience is real, but you’re mis­taken about its sub­ject mat­ter. In other words, you might be mis­taken about your in­ter­nal co­her­ence and there­fore as­sign a prob­a­bil­ity to some­thing that was never there in the first place. This might be on-topic:

One can cer­tainly imag­ine an om­nipo­tent be­ing pro­vided that there is enough vague­ness in the con­cept of what “om­nipo­tence” means; but if one tries to nail this con­cept down pre­cisely, one gets hit by the om­nipo­tence para­dox.

Noth­ing has a prob­a­bil­ity of 1, in­clud­ing this sen­tence, as doubt always re­mains, or does it? It’s con­fus­ing for sure, some­one with enough in­tel­lec­tual horse­power should write a post on it.

• Did I ac­cuse some­one of be­ing in­co­her­ent? I didn’t mean to do that, I only meant to ac­cuse my­self of not be­ing able to fol­low the dis­tinc­tion be­tween a rule of logic (oh, take the Rule of De­tach­ment for in­stance) and a syn­tac­tic elimi­na­tion rule. In virtue of what do the lat­ter es­cape the quan­tum of scep­ti­cal doubt that we should ap­ply to other tau­tolo­gies? I think there clearly is a dis­tinc­tion be­tween be­liev­ing a rule of logic is re­li­able for a par­tic­u­lar do­main, and know­ing with the same con­fi­dence that a par­tic­u­lar in­stance of its ap­pli­ca­tion has been cor­rectly ex­e­cuted. But I can’t tell from the dis­cus­sion if that’s what’s at play here, or if it is, whether it’s be­ing de­ployed in a man­ner care­ful enough to avoid in­co­her­ence. I just can’t tell yet. For in­stance,

Con­di­tion­ing on this tiny cre­dence would pro­duce var­i­ous null im­pli­ca­tions in my rea­son­ing pro­cess, which end up be­ing dis­carded as incoherent

I don’t know what this amounts to with­out fol­low­ing a more de­tailed ex­am­ple.

It all seems to be some­what vaguely along the lines of what Hartry Field says in his Locke lec­tures about ra­tio­nal re­vis­abil­ity of the rules of logic and/​or epistemic prin­ci­ples; his ar­gu­ments are much more de­tailed, but I con­fess I have difficulty fol­low­ing him too.

• Although I’m not sure exactly what to say about it, there’s some kind of connection here to Created Already in Motion and The Bedrock of Fairness: in each case you have an infinite regress of asking for a logical axiom justifying the acceptance of a logical axiom justifying the acceptance of a logical axiom, asking for fair treatment of people’s ideas of fair treatment of people’s ideas of fair treatment, or asking for the probability that a probability of a ratio of probabilities being correct is correct.

• Prob­a­bil­ities of 1 and 0 are con­sid­ered rule vi­o­la­tions and dis­carded.

Is the prob­a­bil­ity for the cor­rect­ness of this state­ment—smaller than 1?

• Obviously

• So, you say, it’s pos­si­ble it isn’t true?

• I would say that ac­cord­ing to my model (i.e. in­side the ar­gu­ment (in this post’s ter­minol­ogy)), it’s not pos­si­ble that that isn’t true, but that I as­sign greater than 0% cre­dence to the out­side-the-ar­gu­ment pos­si­bil­ity that I’m wrong about what’s pos­si­ble.

(A few rele­vant posts: How to Con­vince Me That 2 + 2 = 3; But There’s Still A Chance, Right?; The Fal­lacy of Gray)

• How to Con­vince Me That 2 + 2 = 3

You can think for a moment that 1024*1024 = 1048578. You can make an honest arithmetic mistake. More probable for bigger numbers, less probable for smaller. Very, very small for 2 + 2 and such. But I wouldn’t say it’s zero, and also not that the 0 is always excluded with the probability 1.

Exclusion of 0 and 1 implies that this exclusion is not 100% certain. Kind of a probabilistic modus tollens.

• it’s not pos­si­ble that that isn’t true

What is it that is true? (Just to clar­ify..)

• This:

Prob­a­bil­ities of 1 and 0 are con­sid­ered rule vi­o­la­tions and dis­carded.

Discarding 0 and 1 from the game implies that we have a positive probability that they are wrongly excluded.

• Indeed

I get quite an­noyed when this is treated as a re­fu­ta­tion of the ar­gu­ment that ab­solute truth doesn’t ex­ist. Ac­knowl­edg­ing that there is some chance that a po­si­tion is false does not dis­prove it, any more than the fact that you might win the lot­tery means that you will.

• Some­one claiming that ab­solute truths don’t ex­ist has no right to be ab­solutely cer­tain of his own claim. This of course has no bear­ing on the ac­tual truth of his claim, nor the truth of the sup­posed ab­solute truth he’s try­ing to re­fute by a fully generic ar­gu­ment against ab­solute truths.

I rather prefer Eliezer’s version: that confidence of 2^n to 1 requires [n − log base 2 of prior odds] bits of evidence to be justified. Not only does this essentially forbid absolute certainty (you’d need infinite evidence to justify absolute certainty), but it is actually useful for real life.
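That version translates directly into a one-line computation; a sketch (the helper name is mine, not Eliezer’s):

```python
import math

def bits_needed(confidence_odds, prior_odds=1.0):
    """Bits of evidence required to justify confidence of
    confidence_odds:1, starting from prior odds of prior_odds:1."""
    return math.log2(confidence_odds) - math.log2(prior_odds)

# About 10 bits justify 1000:1 confidence from even priors;
# absolute certainty (odds -> infinity) would take infinite bits.
print(bits_needed(1000))
```

Each bit of evidence is an observation twice as likely under the hypothesis as under its negation, which is why certainty can never be reached with finitely many of them.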

• That’s quite a lot. Can you tell us what the es­ti­mate was?

• Well there are billions of peo­ple who be­lieve things with p=1… things like “God ex­ists.”

• Wow. Elimi­nat­ing all “zero” prob­a­bil­ity es­ti­mates as ille­gal un­der the game rules, it’s pos­si­ble that you sin­gle­hand­edly dragged down the av­er­age Bayesian score of the hu­man species by a no­tice­able decre­ment.

• I’m a bit irked by the con­tinued per­sis­tence of “LHC might de­stroy the world” noise. Given no ev­i­dence, the prior prob­a­bil­ity that micro­scopic black holes can form at all, across all pos­si­ble sys­tems of physics, is ex­tremely small. The same the­ory (String The­ory[1]) that has led us to sug­gest that micro­scopic black holes might form at all is also quite adamant that all black holes evap­o­rate, and equally adamant that micro­scopic ones evap­o­rate faster than larger ones by a pre­cise fac­tor of the mass ra­tio cubed. If we think the the­ory is talk­ing com­plete non­sense, then the pos­te­rior prob­a­bil­ity of an LHC dis­aster goes down, be­cause we fa­vor the ig­no­rant prior of a uni­verse where micro­scopic black holes don’t ex­ist at all.

Thus, the “LHC might de­stroy the world” noise boils down to the pos­si­bil­ity that (A) there is some math­e­mat­i­cally con­sis­tent post-GR, micro­scopic-black-hole-pre­dict­ing the­ory that has mas­sively slower evap­o­ra­tion, (B) this un­named and pos­si­bly non-ex­is­tent the­ory is less Kol­mogorov-com­plex and hence more pos­te­rior-prob­a­ble than the one that sci­en­tists are cur­rently us­ing[2], and (C) sci­en­tists have com­pletely over­looked this un­named and pos­si­bly non-ex­is­tent the­ory for decades, strongly sug­gest­ing that it has a large Leven­shtein dis­tance from the cur­rently fa­vored the­ory. The si­mul­ta­neous satis­fac­tion of these three crite­ria seems… pretty f-ing un­likely, since each tends to re­ject the oth­ers. A/​B: it’s hard to imag­ine a the­ory that pre­dicts post-GR physics with LHC-scale micro­scopic black holes that’s more Kol­mogorov-sim­ple than String The­ory, which can ac­tu­ally be speci­fied pretty damn com­pactly. B/​C: peo­ple already have ex­plored the Kol­mogorov-sim­ple space of post-New­to­nian the­o­ries pretty heav­ily, and even the sim­ple post-GR the­o­ries are pretty well ex­plored, mak­ing it un­likely that even a the­ory with large edit dis­tance from ei­ther ST or SM+GR has been over­looked. C/​A: it seems like a hell of a co­in­ci­dence that a large-edit-dis­tance the­ory, i.e. one ex­tremely dis­similar to ST, would just hap­pen to also pre­dict the for­ma­tion of LHC-scale micro­scopic black holes, then go on to pre­dict that they’re sta­ble on the or­der of hours or more by throw­ing out the mass-cubed rule[3], then go on to ex­plain why we don’t see them by the billions de­spite their claimed sta­bil­ity. (If the ones from cos­mic rays are so fast that the re­sult­ing black holes zip through Earth, why haven’t they eaten Jupiter, the Sun, or other nearby stars yet? 
Bom­bard­ment by cos­mic rays is not unique to Earth, and there are plenty of ce­les­tial bod­ies that would be heavy enough to cap­ture the prod­ucts.)

[1] It’s worth not­ing that our best the­ory, the Stan­dard Model with Gen­eral Rel­a­tivity, does not pre­dict micro­scopic black holes at LHC en­er­gies. Only String The­ory does: ST’s 11-di­men­sional com­pactified space is sup­posed to sud­denly de­com­pactify at high en­ergy scales, mak­ing grav­ity much more pow­er­ful at small scales than GR pre­dicts, thus al­low­ing black hole for­ma­tion at ab­nor­mally low en­er­gies, i.e. those ac­cessible to LHC. And naked GR (minus the SM) doesn’t pre­dict micro­scopic black holes. At all. In­stead, naked GR only pre­dicts su­per­nova-sized black holes and larger.

[2] The biggest pain of SM+GR is that, even though we’re pretty damn sure that that train wreck can’t be right, we haven’t been able to find any disconfirming data that would lead the way to a better theory. This means that, if the correct theory were more Kolmogorov-complex than SM+GR, then we would still be forced as rationalists to trust SM+GR over the correct theory, because there wouldn’t be enough Bayesian evidence to discriminate the complex-but-correct theory from the countless complex-but-wrong theories. Thus, if we are to be convinced by some alternative to SM+GR, either that alternative must be Kolmogorov-simpler (like String Theory, if that pans out), or that alternative must suggest a clear experiment that leads to a direct disconfirmation of SM+GR. (The more-complex alternative must also somehow attract our attention, and also hint that it’s worth our time to calculate what the clear experiment would be. Simple theories get eyeballs, but there are lots of more-complex theories that we never bother to ponder because that solution-space doesn’t look like it’s worth our time.)

[3] Even if they were stable on the order of seconds to minutes, they wouldn’t destroy the Earth: the resulting black holes would be smaller than an atom, in fact smaller than a proton, and since atoms are mostly empty space the black hole would sail through atoms with low probability of collision. I recall that someone familiar with the physics did the math and calculated that an LHC-sized black hole could swing like a pendulum through the Earth at least a hundred times before gobbling up even a single proton, and the same calculation showed it would take over 100 years before the black hole grew large enough to start collapsing the Earth due to tidal forces, assuming zero evaporation. Keep in mind that the relevant computation, t = (5120 × π × G^2 × M^3) ÷ (ℏ × c^4), shows that a 1-second evaporation time corresponds to a mass of 2.28e8 grams[3a], i.e. about 250 tons, and the resulting Schwarzschild radius, r = 2 × G × M ÷ c^2, is 3.39e-22 meters[3b], or about 0.4 millionths of a proton radius[3c]. That one-second-duration black hole, despite being tiny, is vastly larger than the ones that might be created by the LHC -- 10^28 times larger by mass, in fact[3d]. (FWIW, the Schwarzschild radius calculation relies only on GR, with no quantum stuff, while the time-to-evaporate calculation depends on some basic QM as well. String Theory and the Standard Model both leave that particular bit of QM untouched.)

[3a] Google Calculator: “(((1 s) h c^4) / (2pi 5120pi G^2)) ^ (1/3) in grams”

[3b] Google Calculator: “2 G 2.28e8 grams / c^2 in meters”

[3c] Google Calculator: “3.3856695e-22 m / 0.8768 femtometers”, where 0.8768 femtometers is the experimentally accepted charge radius of a proton

[3d] Google Calculator: “(2.28e8 g * c^2) / 14 TeV”, where 14 TeV is the LHC’s maximum energy (7 TeV per beam in a head-on proton-proton collision)
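The footnote arithmetic above can be checked without Google Calculator; here is a minimal sketch using standard SI values for the constants (the 0.8768 fm proton charge radius and the 14 TeV collision energy are taken from the footnotes):

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34    # reduced Planck constant, J s
c = 2.9979e8         # speed of light, m/s
eV = 1.602e-19       # joules per electron-volt

# [3a] Mass whose Hawking evaporation time t = 5120*pi*G^2*M^3 / (hbar*c^4) is 1 second,
# i.e. M = (t * hbar * c^4 / (5120*pi*G^2))^(1/3)
t = 1.0
M = (t * hbar * c**4 / (5120 * math.pi * G**2)) ** (1 / 3)  # kg
print(M * 1000)            # ~2.28e8 grams, i.e. ~250 tons

# [3b] Schwarzschild radius r = 2*G*M / c^2 of that mass
r = 2 * G * M / c**2
print(r)                   # ~3.39e-22 meters

# [3c] ...as a fraction of the proton charge radius (0.8768 fm)
print(r / 0.8768e-15)      # ~4e-7, i.e. ~0.4 millionths

# [3d] Mass ratio of that black hole to the LHC's 14 TeV collision energy
m_lhc = 14e12 * eV / c**2  # kg-equivalent of 14 TeV
print(M / m_lhc)           # ~1e28
```

All four printed values agree with the footnotes to the precision quoted there.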

• I wonder how the anti-LHC arguments on this site might look if we substitute cryptography for the LHC. Mathematicians might say the idea of mathematics destroying the world is ridiculous, but after all we have to trust that all mathematicians announcing opinions on the subject are sane, and we know the number of insane mathematicians in general is greater than zero. And anyway, their arguments would (almost) certainly involve assuming the probability of mathematics destroying the world is 0, so should obviously be disregarded. Thus, the danger of running OpenSSH needs to be calculated as an existential risk taking in our future possible light cone. (Though handily, this would be a spectacular tour de force against DRM.) For an encore, we need someone to calculate the existential risk of getting up in the morning to go to work. Also, did switching on the LHC send back tachyons to cause 9/11? I think we need to be told.

• if the correct theory were more Kolmogorov-complex than SM+GR, then we would still be forced as rationalists to trust SM+GR over the correct theory, because there wouldn’t be enough Bayesian evidence to discriminate the complex-but-correct theory from the countless complex-but-wrong theories.

I reject Solomonoff induction as the correct technical formulation of Occam’s razor, and as an adequate foundation for Bayesian epistemology.

• Looking back over ancient posts, I saw this. I upvoted it earlier, and am leaving that, but I’d like to quibble with one thing:

this unnamed and possibly non-existent theory is less Kolmogorov-complex and hence more posterior-probable than the one that scientists are currently using

I think the bigger issue would be ‘this unnamed and possibly non-existent theory is an accurate description of reality’. If it’s more Kolmogorov-complex, so be it, that’s the universe’s prerogative. Increasing the Kolmogorov complexity decreases only our prior for it; it won’t change whether it is the case.

• One might be tempted to respond “But there’s an equal chance that the false model is too high, versus that it is too low.”

I’m not sure why one might be tempted to make this response. Is the idea that, when making any calculation at all, one is equally likely to get a number that is too big as one that is too small? But then, that’s before you have looked at the number.

Yet another counter-response is that even if the response were true, the false model’s estimate could be much too high, but it can only be slightly too low, since 1 − 10^-9 is already quite close to 1.

• Added to Absolute certainty LW wiki page.

• First, great post. Second, general injunctions against giving very low probabilities to things seem to be taken by many casual readers as endorsements of the (bad) behavior of “privileging the hypothesis”—e.g. moving the probability that God exists from very small to moderately small. That’s not right, but I don’t have excellent arguments for why it’s not right. I’d love it if you wrote an article on choosing good priors.

Cosma Shalizi has done some technical work that seems (to my incompetent eye) to be relevant:

http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.ejs/1256822130&page=record

That is, he takes Bayesian updating, which requires modeling the world, and answers the question ‘when would it be okay to use Bayesian updating, even though we know the model is definitely wrong—e.g. too simple?’. (Of course, making your model “not obviously wrong” by adding complexity isn’t a solution.)

• I am still confused about how small the probability I should use in the God question is. I understand the argument about privileging the hypothesis and about intelligent beings being very complex and fantastically unlikely.

But I also feel that if I tried to use an argument at least that subtle, when applied to something I am at least as confused about as how ontologically complex a first cause should be, to disprove things at least as widely believed as religion, a million times, I would be wrong at least once.

• But I also feel that if I tried to use an argument at least that subtle, when applied to something I am at least as confused about as how ontologically complex a first cause should be, to disprove things at least as widely believed as religion, a million times, I would be wrong at least once.

See Advancing Certainty. The fact that this statement sounds comfortably modest does not exempt it from the scrutiny of the Fundamental Question of Rationality (why do you believe what you believe?). I respectfully submit that if the answer is “because I have been wrong before, where I was equally confident, in previous eras of my life when I wasn’t using arguments this powerful (they just felt powerful to me at the time)”, that doesn’t suffice—for the same reason that the Lord Kelvin argument doesn’t suffice to show that arguments from physics can’t be trusted (unless you don’t think physics has learned anything since Kelvin).

• I’ve got to admit I disagree with a lot of Advancing Certainty. The proper reference class for a modern physicist who is well acquainted with the mistakes of Lord Kelvin and won’t do them again is “past scientists who were well acquainted with the mistakes of their predecessors and planned not to do them again”, which I imagine has less than a hundred percent success rate and which might have included Kelvin.

It would be a useful exercise to see whether the most rational physicists of 1950 have more successful predictions as of 2000 than the most rational physicists of 1850 did as of 1900. It wouldn’t surprise me if this were true, and so, then the physicists of 2000 could justly put themselves in a new reference class and guess they will be even more successful as of 2050 than the 1950ers were in 2000. But if the success rate after fifty years remains constant, I wouldn’t want to say “Yeah, well, we’ve probably solved all those problems now, so we’ll do better”.

• I’ve got to admit I disagree with a lot of Advancing Certainty

Do you actually disagree with any particular claim in Advancing Certainty, or does it just seem “off” to you in its emphasis? Because when I read your post, I felt myself “disagreeing” (and panicking at the rapid upvoting), but reflection revealed that I was really having something more like an ADBOC reaction. It felt to me that the intent of your post was to say “Boo confident probabilities!”, while I tend to be on the side of “Yay confident probabilities!”—not because I’m in favor of overconfidence, but rather because I think many worries about overconfidence here tend to be ill-founded. (I suppose I’m something of a third-leveler on this issue.)

And indeed, when you see people complaining about overconfidence on LW, it’s not usually because someone thinks that some political candidate has a 0.999999999 chance of winning an election; almost nobody here would think that a reasonable estimate. Instead, what you get is people saying that 0.0000000001 is too low a probability that God exists—on the basis of nothing else than general worry about human overconfidence.

I think my anti-anti-overconfidence vigilance started when I realized I had been socially intimidated into backing off from my estimate of 0.001 in the Amanda Knox case, when in fact that was and remains an entirely reasonable number given my detailed knowledge of the case. The mistake I made was to present this number as if it were something that participants in my survey should have arrived at from a few minutes of reading. Those states—the ones that survey participants were in, with reference classes like “highly controversial conviction with very plausible defense arguments”—are what probabilities like 0.1 or 0.3 are for. My state, on the other hand, was more like “highly confident inside-view conclusion bolstered by LW survey results decisively on the same side of 50%”.

But this isn’t what the overconfidence-hawks argued. What they said, in essence, was that 0.001 was just somehow “inherently” too confident. Only “irrational” people wear the attire of “P(X) = 0.001”; We Here, by contrast, are Aware Of Biases Like Overconfidence, and only give Measured, Calm, Reasonable Probabilities.

That is the mistake I want to fight, now that I have the courage to do so. Though I can’t find much to literally disagree about in your post, it unfortunately feels to me like ammunition for the enemy.

• I definitely did have the “ammunition for the enemy” feeling about your post, and the “belief attire” point is a good one, but I think the broad emotional disagreement does express itself in a few specific claims:

1. Even if you were to control for getting tired and hungry and so on, even if you were to load your intelligence into a computer and have it do the hard work, I still don’t think you could judge a thousand such trials and be wrong only once. I admit this may not be as real a disagreement as I’m thinking, because it may be a confusion on what sort of reference class we should use to pick trials for you.

2. I think we might disagree on the Lord Kelvin claim. I think I would predict more of today’s physical theories are wrong than you would.

3. I think my probability that God exists would be several orders of magnitude higher than yours, even though I think you probably know about the same number of good arguments on the issue as I do.

Maybe our disagreement can be resolved empirically—if we were to do enough problems where we gave confidence levels on questions like “The area of Canada is greater than the area of the Mediterranean Sea” and used log odds scoring, we might find one of us doing significantly better than the other—although we would have to do quite a few to close off my possible argument that we just didn’t hit that one “black swan” question on which you’d say you’re one in a million confident and then get it wrong. Would you agree that this would get to the heart of our disagreement, or do you think it revolves solely around more confusing philosophical questions?

(I took a test like that yesterday to test something and I came out overconfident, missing 2/10 questions at the 96% probability level. I don’t know how that translates to more real-world questions and higher confidence levels, but it sure makes me reluctant to say I’m chronically underconfident.)

• I still don’t think you could judge a thousand such trials and be wrong only once.

When I first saw this, I agreed with it. But now I don’t, partly because of the story (which I don’t have a link to, but it was linked to from LW somewhere) about someone who would bet they knew whether or not a number was prime. This continued until they made a mistake (doing it mentally), and then they lost.

If they had a calculator, could they go up to the 1000th odd number and be wrong at most once? I’m pretty sure they could, actually. And so the question isn’t “can you judge 1000 trials and only get one wrong?” but “can you judge 1000 obvious trials and only get one wrong?”, or, more appropriately, “can you judge 1000 trials as either ‘obvious’ or ‘contested’ and only be wrong at most once?”. Because originally I was imagining being a normal trial judge—but a normal trial judge has to deal with difficult cases. Ones like the Amanda Knox case (are/should be) rare. I’m pretty confident that once you put in a reasonable amount of effort (however much komponisto did for this case), you can tell whether or not the case is one you can be confident about or one you can’t, assuming you’re carefully thinking about what would make them not open-and-shut cases.

• 17 Dec 2010 22:23 UTC
5 points

There are really two claims here. The first one—that if some guy on the Internet has a model predicting X with 99.99% certainty, then you should assign less probability to X, absent other evidence—seems interesting, but relatively easy to accept. I’m pretty sure I’ve been reasoning this way in the past.

The second claim is exactly the same, but applied to oneself. “If I have come up with an argument that predicts X with 99.99% certainty, I should be less than 99.99% certain of X.” This is not something that people do by default. I doubt that I do it unless prompted. Great post!

Stylistic nitpick, though: things like “999,999,999 in a billion” are tricky to parse, especially when compared to “999,999,999,999 in a trillion” (which I initially read as approximately 1 in 1000 before counting the 9s) or “1/1 billion”. Counting the 9s is part of the problem; the other is that the numerator is a number and the denominator is a word. What’s wrong with writing 99.99% and 99.9999%? These are different from the original values in the post, but still carry the argument, and are easier to read.

• I personally find the best way to deal with such numbers is to talk about nines.

999,999,999 in a billion = 99.9 999 999% = 9 nines

999,999,999,999 in a trillion = 99.9 999 999 999% = 12 nines
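The conversion is just a logarithm: the number of nines in a probability p is −log10(1 − p). A quick sketch:

```python
import math

def nines(p):
    """Number of leading nines in probability p, e.g. 0.999 -> 3."""
    return -math.log10(1 - p)

print(round(nines(999_999_999 / 1e9)))       # 9 nines
print(round(nines(999_999_999_999 / 1e12)))  # 12 nines
```

(For probabilities this close to 1, floating-point subtraction of 1 − p loses precision, so rounding to the nearest integer is appropriate.)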

• This raises the question: Should scientific journals adjust the p-value that they require from an experiment, to be no larger than the probability (found empirically) that a peer-reviewed article contains a factual, logical, methodological, experimental, or typographical error?

• The meta-science part would change with time, e.g. how many people read the article and found no mistakes. Doesn’t seem to mix well with a fixed result.

Maybe some separate, online thing that just reported on the probability of claims could handle the meta-science.

• This is not a fully general argument against giving very high levels of confidence:

It seems to me we can use the very high confidence levels and our understanding of the area in question to justify ignoring, heavily discounting, or accepting the arguments. We can do this on the basis that it takes a certain amount of evidence to actually produce accurate beliefs.

In the case of the creationist argument, a confidence level of 10^4,478,296 to 1 requires (really) roughly 15,000,000 bits of evidence (10^4,478,296 ≈ 2^14,900,000). The creationist presents these fifteen million bits in the form of observations about cells. Now, using our knowledge of biology and cells (specifically, their self-assembling nature, restrictions on which proteins can combine, persistence and reproduction) we can confidently say that observations of cells do not provide 15,000,000 bits of evidence.
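The odds-to-bits conversion used here is mechanical: an odds ratio of 10^k to 1 corresponds to k × log2(10) bits. A one-line check, with the creationist's exponent plugged in:

```python
import math

# Converting an odds ratio of 10^k : 1 into bits of evidence
k = 4_478_296
bits = k * math.log2(10)
print(bits)  # ~1.49e7, i.e. roughly fifteen million bits
```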

I’m not knowledgeable about biology, so I can’t say how many bits of evidence for a creator they provide in this manner, but I gather it’s not many. We then adjust the argument’s strength down to that many bits of evidence. In effect, we are discounting the creationist argument for a lack of understanding, and discounting it by exactly how much it lacks understanding.

Applying this to the LHC argument: the argument specifies odds of 10^25 to 1, and the evidence is in the form of cosmic ray interactions not destroying the world. Based on our understanding of the physics involved (including our understanding of the results of Giddings and Mangano), we can say that cosmic ray interactions don’t provide quite as much evidence as the argument claims—but they provide most of the evidence they claimed to (even if we have to resort to our knowledge about white dwarf stars).

I think we should prefer to downgrade the argument from our knowledge about the relevant area rather than hallucinations or simple error, because the prior for ‘our understanding is not complete’ is higher than ‘hallucinating’ and ‘simple error’ put together—and, to put it bluntly, in the social process of beliefs and arguments, most people are capable of completely dismissing arguments from hallucination and simple error, but are far less capable of dismissing arguments from incomplete knowledge.

As for inside / outside the argument, I found it helpful while reading the post to think of the outside view as a probability mass split between A and ~A, and then inside the argument tells us how much probability mass the argument steals for its side. This made it intuitive, in that if I encountered an argument that boasted of stealing all the probability mass for one side, and I could still conceive of the other side having some probability mass left over, I should distrust that argument.

• I don’t think the lottery is an exception. There’s a chance that you misheard and they said “million”, not “billion”.

• This was predictable: this was a simple argument in a complex area trying to prove a negative, and it would have been presumptuous to believe with greater than 99% probability that it was flawless. If you can only give 99% probability to the argument being sound, then it can only reduce your probability in the conclusion by a factor of a hundred, not a factor of 10^20.

As I recall, there was a paper in 2008 or 2009 about the LHC problem which concluded, in effect, that the small chances that an analysis was incorrectly carried out cumulatively put a high floor on how low a risk we could conclude the LHC posed.

Unfortunately, I can’t seem to refind it to see whether it’s a better version of this argument, so perhaps someone else remembers specifics.

• Looks like it, thanks:

‘Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.’

• But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.

Only to the extent you didn’t trust in the statement other than because this model says it’s probably true. It could be that you already believe in the statement strongly, and so your external level of confidence should be higher than the model suggests, or the same, etc. Closer to the prior, in other words, and on strange questions intuitive priors can be quite extreme.

• Another voting example; “Common sense and statistics”, Andrew Gelman:

A paper* was published in a political science journal giving the probability of a tied vote in a presidential election as something like 10^-90**. Talk about innumeracy! The calculation, of course (I say “of course” because if you are a statistician you will likely know what is coming), was based on the binomial distribution with known P. For example, Obama got something like 52% of the vote, so if you take n=130 million and P=0.52 and figure out the probability of an exact tie, you can work out the formula etc etc.

From empirical grounds that 10^-90 thing is ludicrous. You can easily get an order-of-magnitude estimate by looking at the empirical probability, based on recent elections, that the vote margin will be within 2 million votes (say) and then dividing by 2 million to get the probability of it being a tie or one vote from a tie.

The funny thing—and I think this is a case for various bad numbers that get out there—is that this 10^-90 has no intuition behind it, it’s just the product of a mindlessly applied formula (because everyone “knows” that you use the binomial distribution to calculate the probability of k heads in n coin flips). But it’s bad intuition that allows people to accept that number without screaming. A serious political science journal wouldn’t accept a claim that there were 10^90 people in some obscure country, or that some person was 10^90 feet tall.

...To continue with the Gigerenzer idea [of turning probabilities into frequencies], one way to get a grip on the probability of a tied election is to ask a question like, what is the probability that an election is determined by less than 100,000 votes in a decisive state. That’s happened at least once. (In 2000, Gore won Florida by only 20-30,000 votes.***) The probability of an exact tie is of the order of magnitude of 10^(-5) times the probability of an election being decided by less than 100,000 votes...Recent national elections is too small a sample to get a precise estimate, but it’s enough to make it clear that an estimate such as 10^-90 is hopelessly innumerate.

1. * “Is it Rational to Vote? Five Types of Answer and a Suggestion”, Dowding 2005; fulltext: https://pdf.yt/d/5veaHe6F5j-k6oNQ / https://www.dropbox.com/s/fxgfa04hmpfntgh/2005-dowding.pdf / http://libgen.org/scimag/get.php?doi=10.1111%2Fj.1467-856x.2005.00188.x

2. ** 1/1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000; or to put it in context, ‘inside’ the argument, the claim is that you could hold a presidential election for every atom in the universe, and still not ever have a candidate win by one vote

3. *** From the comments:

It happened five or six times in 2000 alone, depending on how you think about electoral-college ties. http://en.wikipedia.org/wiki/United_States_presidential_election,_2000#Votes_by_state

• What leads you to conclude that the chance of a vote margin of 1 is anywhere near 1/X of the chance of a vote margin of X? That’s not obvious, and your quote doesn’t try to derive it.

• The easy-but-not-very-rigorous method is to use the principle of indifference, since there’s no particular reason a tie +/-1 should be much less likely than any other result.

If the election is balanced (the mean of the distribution is a tie), and the distribution looks anything like normal or binomial, 1/X is an underestimate of P(tie | election is within vote margin of X), since a tie is actually the most likely result. A tie +/-1 is right next to the peak of the curve, so it should also be more than 1/X.

The 10^-90 figure cited in the paper was an example of how the calculation is very sensitive to slight imbalances—a 50/50 chance for each voter gave a .00006 chance of a tie, while 49.9/50.1 gave the 10^-90. But knowing that an election will be very slightly imbalanced in one direction is a hard epistemic state to get to. Usually we just know something like “it’ll be close”, which could be modeled as a distribution over possible near-balances. If that distribution is not itself skewed in either direction, then we again find that individual results near the mean should be at least 1/X.
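The sensitivity to slight imbalance is easy to reproduce with the binomial model; here is a sketch using an illustrative electorate of 100 million voters (not the paper's exact numbers), computed in log space to avoid underflow:

```python
import math

def log10_binom_pmf(n, k, p):
    """log10 of the binomial probability of exactly k successes in n trials."""
    log_coeff = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    ln_pmf = log_coeff + k * math.log(p) + (n - k) * math.log(1 - p)
    return ln_pmf / math.log(10)

n = 100_000_000  # illustrative electorate size (even, so an exact tie is possible)
tie = n // 2

print(log10_binom_pmf(n, tie, 0.500))  # ~ -4.1, i.e. P(tie) ~ 8e-5
print(log10_binom_pmf(n, tie, 0.499))  # ~ -91,  i.e. P(tie) ~ 10^-91
```

Shifting each voter's probability by just 0.1 percentage point moves the tie probability from roughly one in ten thousand to roughly 10^-91, which is the fragility the comment describes.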

• I recently wrote about why voting is a terrible idea and fell into the same error as Gelman (I assumed 49.9-50.1 a priori is conservative). Wes and gwern, thanks for correcting me! In fact, due to the Median Voter Theorem and with better and better polling and analysis, we may assume that the distribution of voter distributions should have a peak at 50-50.

Of course, there are other great reasons not to vote (mainly to avoid “enlisting in the army” and letting your mind be killed). My suggestion is always to find a friend who is a credible threat to vote for the candidate you despise most and invite him to a beer on election day under the condition that neither of you will vote and you will not talk about politics. Thus, you maintain your friendship while cancelling out the votes. I call it the VAVA (voter anti-voter annihilation) principle.

• If the election is balanced (the mean of the distribution is a tie)...

That’s an important and non-obvious assumption to make.

a 50/50 chance for each voter gave a .00006 chance of a tie, while 49.9/50.1 gave the 10^-90

So, in short, the 10^-90 figure is based on the explicit assumption that the election is not balanced?

That’s why the two methods you mention produce such wildly different figures; they base their calculations on different basic assumptions. One can argue back and forth about the validity or lack thereof of a given set of assumptions, of course...

• That’s an important and non-obvious assumption to make.

Yes, I agree.

I’m much more sympathetic to the 10^-90 estimate in the paper than Gelman’s quote is; I think he misrepresents the authors in claiming they asserted that probability, when actually they offered it as a conditional (if you model it this way, then it’s 10^-90).

• One can argue back and forth about the validity or lack thereof of a given set of assumptions, of course...

That is why I posted it as a comment on this particular post, after all. It’s clear that our subjective probability of casting a tie-breaking vote is going to be far less extreme than 10^-90, because our less-than-certain belief that the binomial idealization is correct puts a much less extreme bound on the tie-breaking-vote probability than just taking 10^-90 at face value.

• This one seems pretty relevant here:

Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes—Toby Ord, Rafaela Hillerbrand, Anders Sandberg

• Thanks, also added to the wiki page (which now seems to have two related but non-identical topics and probably needs to split).

• Great post!

The moment the topic came up, I also thought back to something I once heard a creationist say. Most amusingly, not only did that probability have some fatuously huge order of magnitude, its mantissa was quoted to about 5 decimal places.

One gets ‘target confusion’ in such cases—shall I point out that no engineer would ever quote a probability like that to their boss, on pain of job loss? Shall I ask if my interlocutor even knows what a “power” IS?

• One might be tempted to re­spond “But there’s an equal chance that the false model is too high, ver­sus that it is too low.” Maybe there was a bug in the com­puter pro­gram, but it pre­vented it from giv­ing the in­cum­bent’s real chances of 999,999,999,999 out of a trillion.

I have a differ­ent re­sponse to this than the one you gave.

Con­sider your meta (“out­side”) un­cer­tainty over log-odds, in which in­de­pen­dent ev­i­dence can be added, in­stead of prob­a­bil­ities. A dis­tri­bu­tion that av­er­ages out to the “in­ter­nal” log-odds would, when trans­lated back into prob­a­bil­ities, have an ex­pected prob­a­bil­ity closer to 12 than the “in­side” prob­a­bil­ity.

If you apply this to your prior probability as well as the evidence, this should generally move your probabilities towards 1/2.

• If you apply this to your prior probability as well as the evidence, this should generally move your probabilities towards 1/2.

This looks wrong to me. You can write your prior as a log-odds, and your pieces of evidence as several log-likelihood ratios, but while it’s fairly obvious to me that your meta-uncertainty over log-likelihoods sends the extra evidence toward 0 and thus the overall probability toward the prior, I don’t see at all why it makes sense to do something analogous to the log-odds prior, which sends that to 0 and thus the overall probability to 0.5.

What’s go­ing on? Is the ar­gu­ment some­thing like “well I have one pos­si­bil­ity and then not-that-pos­si­bil­ity, so if I look purely at the struc­ture I should say ‘two pos­si­bil­ities, sym­met­ric, 50/​50!’”? I think that works if you gen­er­ate all pos­si­bil­ities in es­ti­ma­tions like this uniformly (esp. a pos­si­bil­ity and its com­ple­ment)? Any­way, IMO it’s a much stric­ter “out­side view” to send your pri­ors to 0.5 than it is to send your ev­i­dence to 0.

• It might help to work an ex­am­ple.

Suppose we are interested in an event B with prior probability P(B) = 1/2, which is prior log odds L(B) = 0, and have observed evidence E which is worth 1 bit, so L(B|E) = 1 and P(B|E) = 2/3 ~= .67. But if we are meta-uncertain of the strength of evidence E, such that we assign probability 1/2 that it is worth 0 bits and probability 1/2 that it is worth 2 bits, then the expected log odds is EL(B|E) = 1, but the expected probability is EP(B|E) = (1/2)*(1/2) + (1/2)*(4/5) = (.5 + .8)/2 = .65, decreasing towards 1/2 from P(B|E) ~= .67.

But what if instead the prior probability was P(B) = 1/5, or L(B) = −2? Then, with the same evidence and the same meta-uncertainty, EL(B|E) = L(B|E) = −1, P(B|E) = 1/3 ~= .33, and EP(B|E) = .35, this time increasing towards 1/2.

Note this did not even require meta-uncertainty over the prior; only the uncertainty over the total posterior log-odds is important. Also note that even though uncertainty moves the expected probability towards 1/2, it does not move the expected log-odds towards 0.
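As a sanity check, the two worked cases can be reproduced numerically (a minimal sketch; log-odds are measured in bits, matching the comment’s convention):

```python
def prob(log_odds_bits):
    # convert base-2 log-odds back to a probability
    odds = 2.0 ** log_odds_bits
    return odds / (1 + odds)

# Case 1: prior log-odds 0, evidence worth 0 or 2 bits with chance 1/2 each
ep1 = (prob(0 + 0) + prob(0 + 2)) / 2      # (.5 + .8)/2 = .65
# Case 2: prior log-odds -2, same meta-uncertain evidence
ep2 = (prob(-2 + 0) + prob(-2 + 2)) / 2    # (.2 + .5)/2 = .35

print(ep1, prob(1))    # .65 vs ~.667: expected probability moved down toward 1/2
print(ep2, prob(-1))   # .35 vs ~.333: expected probability moved up toward 1/2
```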

• Note that your observation does not generalize to more complex log-odds distributions. Here is a simple counterexample:

Let’s say that L(B|E) = 1+x with chance 2/3, and L(B|E) = 1−2x with chance 1/3. It still holds that EL(B|E) = 1. But the expected probability EP(B|E) is now not a monotone function of x. It has a global minimum at x=2.

  x    EP(B|E)
  0    0.66667
  1    0.64444
  2    0.62963
  3    0.63755
  4    0.64905
  5    0.65706
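The values in that table can be regenerated in a few lines (log-odds in bits, as in the counterexample):

```python
def prob(log_odds_bits):
    # convert base-2 log-odds back to a probability
    odds = 2.0 ** log_odds_bits
    return odds / (1 + odds)

def ep(x):
    # L(B|E) = 1+x with chance 2/3, 1-2x with chance 1/3; EL(B|E) = 1 for all x
    return (2/3) * prob(1 + x) + (1/3) * prob(1 - 2 * x)

for x in range(6):
    print(x, ep(x))
# the minimum over these integer x values is at x = 2
```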

• Indeed. It looks like the effect I described occurs when the meta-uncertainty is over a small range of log-odds values relative to the posterior log-odds, and there is another effect that could produce arbitrary expected probabilities given the right distribution over an arbitrarily large range of values. For any probability p, let L(B|E) = average + (1-p)*x with probability p and L(B|E) = average − p*x with probability (1-p); then the limit of the expected probability as x approaches infinity is p.

It has a global min­i­mum at x=2.

I no­tice that this is where |1 + x| = |1 − 2x|. That might be in­ter­est­ing to look into.

(Pos­si­ble more rigor­ous and ex­plicit math to fol­low when I can fo­cus on it more)
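A quick numerical check of this limiting construction (log-odds in bits; the particular p and x below are arbitrary illustration values):

```python
def prob(log_odds_bits):
    # convert base-2 log-odds back to a probability
    odds = 2.0 ** log_odds_bits
    return odds / (1 + odds)

def ep(p, x, average=0.0):
    # L(B|E) = average + (1-p)*x with probability p,
    # L(B|E) = average - p*x   with probability 1-p;
    # the expected log-odds equals `average` for every x
    return p * prob(average + (1 - p) * x) + (1 - p) * prob(average - p * x)

# as x grows, the expected probability approaches p
print(ep(0.9, 1000))   # ~0.9
print(ep(0.1, 1000))   # ~0.1
```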

• I let L(B|E) be uniform from x−s/2 to x+s/2 and got that P(B|E) = $\frac{1}{s} \ln\frac{A e^{s/2}+1}{A e^{-s/2}+1}$, where A is the odds if L(B|E)=x. In the limit as s goes to infinity, it looks like the interesting pieces are a term that’s the log of the prior probability dropping off as s grows linearly, plus a term that eventually looks like (1/s)*ln(e^(s/2)) = 1/2, which means we approach 1/2.
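That closed form (assuming natural-log odds) agrees with a direct numerical average, and the drift toward 1/2 for large s is visible:

```python
from math import exp, log

def formula(x, s):
    # (1/s) * ln((A*e^(s/2) + 1) / (A*e^(-s/2) + 1)), A = odds at the centre
    A = exp(x)
    return log((A * exp(s / 2) + 1) / (A * exp(-s / 2) + 1)) / s

def numeric(x, s, n=200_000):
    # average the probability over L(B|E) uniform on [x - s/2, x + s/2]
    total = 0.0
    for i in range(n):
        l = x - s / 2 + (i + 0.5) * s / n
        total += exp(l) / (1 + exp(l))
    return total / n

print(formula(-1.0, 4.0), numeric(-1.0, 4.0))  # agree closely
print(formula(-3.0, 200.0))                    # already near 1/2 for large s
```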

• Oh I see, I thought you were say­ing some­thing com­pletely differ­ent. :D Yes, it looks like keep­ing the ex­pec­ta­tion of the ev­i­dence con­stant, the fi­nal prob­a­bil­ity will be closer to 0.5 the larger the var­i­ance of the ev­i­dence. I thought you were talk­ing about what our pri­ors should be on how much ev­i­dence we will tend to re­ceive for propo­si­tions in gen­eral from things we in­tuit as one source or some­thing.

• The map being distinct from the territory, you must go outside your map to discount your probability calculations made in the map. But how to do this? You must resort to a stronger map. But then the calculations there are subject to the errors in designing that map.

You can run this logic down to the deep­est level. How does a ra­tio­nal per­son adopt a Bayesian method­ol­ogy? Is there not some prob­a­bil­ity that the choice of method­ol­ogy is wrong? But how do you con­ceive of that prob­a­bil­ity, when Bayesian con­sid­er­a­tions are the only ones available to eval­u­ate truth from given ev­i­dence?

Why don’t these con­sid­er­a­tions prove that Bayesian episte­mol­ogy isn’t the true ac­count of knowl­edge?

• You are un­wind­ing past the brain that does the un­wind­ing.

A ra­tio­nal agent goes “golly, I seem to im­ple­ment Oc­cam’s Ra­zor, and look­ing at that prin­ci­ple with my cur­rent im­ple­men­ta­tion of Oc­cam’s Ra­zor, it seems like it is a sim­ple hy­poth­e­sis de­scribing that hy­pothe­ses should be sim­ple be­cause the uni­verse is sim­ple.”

That is liter­ally all you can do. If you im­ple­ment anti-oc­camian pri­ors the above goes some­thing like: “It seems like a stochas­tic hy­poth­e­sis de­scribing that hy­pothe­ses should all differ and be com­pli­cated be­cause the uni­verse is com­pli­cated and stochas­tic.”

So, you can­not ‘run this logic down to the deep­est level’ be­cause at the deep­est level there is noth­ing to ar­gue with.

• Why don’t these con­sid­er­a­tions prove that Bayesian episte­mol­ogy isn’t the true ac­count of knowl­edge?

Looks to me like you’ve proved that no one can ever change their be­liefs or method­ol­ogy, so not only have you dis­proven Bayesian episte­mol­ogy, you’ve man­aged to dis­prove ev­ery­thing else too!

• Counter ex­am­ple: I changed my episte­mol­ogy from Aris­totelian to Aris­to­tle + Bayes + fre­quen­tism.

• This is at best weakly related, but consider the statistics of error in a communications channel. Here, simulations are often used to run trillions of trials to simulate (Monte Carlo calculate) the conditions that give bit error rates (BER) of 10^-7, 10^-8, and so on. As an engineer more familiar with the physical layer (transistor amplifiers, thermal noise in channels, scattering of RF, etc.), I know that the CONDITIONS for these Monte Carlo calculations to mean something in the real circuits are complex and not as common as the new PhD doing the calculation thinks they are. Further, the lower the BER calculated, the more likely something else has come along to bite you on the arse and raise the actual error rate in an actual circuit. STILL, in engineering presentation after presentation, people put these numbers up and other people nod gravely when they see them.

Amazingly, I’m finding the feeling of the post, but with error rates that are gigantic compared to the probabilities discussed in the article. We get wiggly when a 1 has 6 zeros in front of it; you are using exponential notation to avoid writing much longer strings of zeros.

Maybe the “great filter” that pre­vents us see­ing a uni­verse filled with at least a few other in­tel­li­gent species is that fi­nally, one of the big physics ex­per­i­ments large smart civ­i­liza­tions build fi­nally does de­stroy the lo­cal so­lar sys­tem. Maybe we should ban suc­ces­sors to the Large Hadron Col­lider un­til we are en­sconced in at least one other so­lar sys­tem.

• In or­der for a sin­gle cell to live, all of the parts of the cell must be as­sem­bled be­fore life starts. This in­volves 60,000 pro­teins that are as­sem­bled in roughly 100 differ­ent com­bi­na­tions. The prob­a­bil­ity that these com­plex group­ings of pro­teins could have hap­pened just by chance is ex­tremely small. It is about 1 chance in 10 to the 4,478,296 power. The prob­a­bil­ity of a liv­ing cell be­ing as­sem­bled just by chance is so small, that you may as well con­sider it to be im­pos­si­ble. This means that the prob­a­bil­ity that the liv­ing cell is cre­ated by an in­tel­li­gent cre­ator, that de­signed it, is ex­tremely large. The prob­a­bil­ity that God cre­ated the liv­ing cell is 10 to the 4,478,296 power to 1.

Note that some­one just gave a con­fi­dence level of 10^4478296 to one and was wrong. This is the sort of thing that should never ever hap­pen. This is pos­si­bly the most wrong any­one has ever been.

Par­tic­u­larly in the light of the fact that he seems to have got the num­bers the wrong way round from what he in­tended in the fi­nal sen­tence.

• Did he? I thought he just meant ‘odds’ when he said ‘prob­a­bil­ity’.

• Not re­ally; “The odds that God cre­ated the liv­ing cell are 10 to the 4,478,296 power to 1” would mean that it’s that ridicu­lously im­prob­a­ble that God cre­ated the cell, which is clearly not what that au­thor was ar­gu­ing.

• No, no. The guy’s worse mistake is not that. If he really thinks that a cell can be jigsawed from individual proteins etc. (and think of all the water and ions and stuff), in a single event, then the odds he gives are the odds of God getting the cell right.

• We have hypothesis H and evidence E, and we dutifully compute

P(H) * P(E | H) / P(E)

It sounds like your ad­vice is: don’t up­date yet! Espe­cially if this num­ber is very small. We might have made a mis­take. But then how should we up­date? “Round up” seems prob­le­matic.

• don’t up­date yet!

I read it to mean “update again” based on the probability that E is flawed. This will tend to adjust back toward your prior.

• While you do that, the prob­a­bil­ity for the es­ti­mate be­ing dy­nam­i­cally un­sta­ble should go up and then down again. Other­wise, you might make some strange de­ci­sions in-be­tween, where the trade­off be­tween wait­ing for new in­for­ma­tion and de­cid­ing right now will be as for the hon­est es­ti­mate and not an in­ter­me­di­ate step in a multi-step up­dat­ing pro­ce­dure with know­ably in­cor­rect in­ter­me­di­ate re­sults.

• I’m not saying not to use Bayes’ theorem, I’m saying to consider very carefully what to plug into “E”. In the election example, your evidence is “A guy on a website said that there was a 999,999,999 in a billion chance that the incumbent would win.” You need to compute the probability of the incumbent winning given this actual evidence (the evidence that a guy on a website said something), not given the evidence that there really is a 999,999,999/billion chance. In the cosmic ray example, your evidence would be “There’s an argument that looks like it should make a less than 1 in 10^20 chance of apocalypse”, which may have different evidential value depending on how well your brain judges the way arguments look.
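The arithmetic behind discounting the model’s internal confidence is a one-liner. Note the 0.1% chance that the model is flawed, and the 1/2 fallback probability when it is, are illustrative assumptions here, not measured rates:

```python
p_model_sound = 0.999                 # assumed trust in the model, not measured
p_win_if_sound = 999_999_999 / 1e9    # the model's internal confidence
p_win_if_flawed = 0.5                 # assume a flawed model tells us nothing

# total probability: mix the two cases by how much we trust the model
p_win = p_model_sound * p_win_if_sound + (1 - p_model_sound) * p_win_if_flawed
print(p_win)   # ~0.9995: external confidence is capped by trust in the model
```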

EDIT: Or what nerzhin said.

• I think this amounts to say­ing: real-world con­sid­er­a­tions force an up­per bound on abs(log(P(E | H) /​ P(E))). I’m on board with that, but can we think about how to com­pute and in­crease this bound?

• P(E) can be broken down into P(E|A)P(A) + P(E|~A)P(~A). Our temptation, when looking at a model, is to treat P(E|~A)*P(~A) as smaller than it really is: the question is, “Is the number of worlds in which the hypothesis is false but the evidence exists anyway large or small?” Yvain is noting that, because we are crazy, we tend to forget about many (or most) of these worlds when looking at evidence. We should expect the number of these worlds to be much larger than the number of worlds in which our probability calculations are everywhere and always correct.

The math doesn’t work out to “round up” ex­actly. It’s situ­a­tion-de­pen­dent. It’s en­tirely pos­si­ble that the model is so ill-speci­fied that ev­ery vari­able has the wrong sign. The math will usu­ally work out to de­vi­a­tion to­wards pri­ors, even if only slightly.

Here’s a post on the same prob­lem in so­cial sci­ences.

• What’s A?

“De­vi­a­tion to­wards pri­ors” sounds again like we are posit­ing a bound on log(P(E|H)/​P(E)). How can I es­ti­mate this bound?

• I spec­u­late there’s at least two prob­lems with the cre­ation­ism odds calcu­la­tion. First, it looks like the per­son do­ing the calcu­la­tion was work­ing with maybe 60,000 pro­tein molecules rather than zillions of pro­tein molecules.

The sec­ond prob­lem I’m hav­ing trou­ble putting pre­cisely in words, con­cern­ing the use of the uniform dis­tri­bu­tion as a prior. Some­times the use of the uniform dis­tri­bu­tion as a prior seems to me to be en­tirely jus­tified. An ex­am­ple of this is where there is a well-con­structed model as to sub­se­quent out­comes.

Other times, when the model for sub­se­quent out­comes is sketchy, the uniform dis­tri­bu­tion is used as a prior sim­ply as a de­fault. Or, as in this case, it’s clearly not an ap­pro­pri­ate prior. In this case, the per­son is prob­a­bly as­sum­ing that all com­bi­na­tions of pro­teins are equally likely (I sus­pect this as­sump­tion is false.)

• Con­sider that 1) There is more than one pos­si­ble ar­range­ment of pro­teins which qual­ifies as a liv­ing cell, and that 2) the ma­te­ri­als of which pro­teins are made had quite a long time to shuffle around and try out differ­ent con­figu­ra­tions be­tween when the earth cooled and the pre­sent day, to say noth­ing of other planets el­se­where in the uni­verse, and that 3) once a liv­ing, self-repli­cat­ing, self-re­pairing cell has come to ex­ist in an area with ap­pro­pri­ate raw ma­te­ri­als and a steady en­ergy source it will cre­ate more such cells, so it only has to hap­pen once.

So, we’re look­ing at a sam­ple size equal to, by my back-of-the-en­velope es­ti­ma­tion, the num­ber of cell-sized vol­umes in Earth’s at­mo­sphere and oceans, times the num­ber of planck in­stants in a lit­tle over four billion years, times the num­ber of earth-like planets in the uni­verse. The ac­tual uni­verse, not just the part we can see.

For in­tel­li­gent de­sign to be the most rea­son­able ex­pla­na­tion, the prob­a­bil­ity of life emerg­ing spon­ta­neously would have to be low enough that, in a sam­ple of that size, we wouldn’t ex­pect to see it hap­pen even once, and, fur­ther­more, the de­signer’s own ori­gin would need to be ex­plained in such a way as to be less im­prob­a­ble.

• You shouldn’t use Planck times unless the proteins can rearrange themselves that quickly.

• If the tem­per­a­ture is high enough that there’s molec­u­lar move­ment at all, you could ob­serve a col­lec­tion of pro­teins ev­ery Planck-in­stant and see a (slightly) differ­ent ar­range­ment each time. You might be stuck with similar ones, es­pe­cially sta­ble con­figu­ra­tions, for a long time… but that’s ex­actly the sort of bias that makes life pos­si­ble.

• Isn’t the problem more like: they are ignoring the huge number of bits of evidence that say that cells in fact exist. They aren’t comparing between hypotheses that say cells exist. They are comparing the uniform prior for cells existing to the prior for only random proteins existing. They sound more like they are trying to argue that all our experiences cannot be enough evidence that there are cells, which seems weird.

• This is a mis­in­ter­pre­ta­tion. The ar­gu­ment goes like this:

True statement: There is lots of evidence for cells. P(Evidence|Cells)/P(Evidence|~Cells) >> 1.

False state­ment: Without in­tel­li­gent de­sign, cells could only be pro­duced by ran­dom chance. P(Cells|~God) is very very small.

De­bat­able state­ment: P(Cells|God) is large.

Con­clu­sion: We up­date mas­sively in fa­vor of God and against ~God, be­cause of, not in op­po­si­tion to, the mas­sive ev­i­dence in fa­vor of the ex­is­tence of cells.

This is valid Bayesian up­dat­ing, it’s just that the false state­ment is false.

• False state­ment: Without in­tel­li­gent de­sign, cells could only be pro­duced by ran­dom chance. P(Cells|~God) is very very small.

You’re ab­solutely right! This is one of the key mis­taken be­liefs that cre­ation­ists hold. I’ve had the most suc­cess in con­vinc­ing them oth­er­wise (or at least mak­ing them doubt) us­ing the ar­gu­ment given by Dawk­ins in The God Delu­sion:

Our likelihood heuristic is strongly tied to both our lifespans and the subjective rate at which we experience time passing. Example: if we lived hundreds of times longer, current probabilities of, say, dying in a car accident would appear totally unacceptable, because the expected number of car accidents in our lifetime would correspondingly be hundreds of times higher.

The hun­dreds of mil­lions of years be­tween the for­ma­tion of the Earth and the ap­pear­ance of life are sim­ply much too large of a time-span for our like­li­hood heuris­tic to ap­ply, and do­ing some sim­ple math [omit­ted; if some­one wants to give some ap­prox­i­mate num­bers that’d be nice] shows that the prob­a­bil­ity of repli­ca­tors aris­ing in that time-span is far from neg­ligible.
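The “simple math” can be sketched with stand-in numbers. The per-trial probability and trial count below are purely illustrative placeholders, not real biochemical figures:

```python
from math import log1p, expm1

p_per_trial = 1e-30    # hypothetical chance of replicators forming in one trial
n_trials = 1e33        # hypothetical number of independent opportunities

# P(at least one success) = 1 - (1 - p)^n, computed stably in log space
p_ever = -expm1(n_trials * log1p(-p_per_trial))
print(p_ever)   # effectively 1: tiny per-trial odds, astronomical trial count
```

The point is structural: whenever n is large compared to 1/p, the overall probability is close to 1 no matter how unintuitive p looks on its own.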

• Upvoted for suc­cess­fully cor­rect­ing my con­fu­sion about this ex­am­ple and helping me get up­dat­ing a lit­tle bet­ter.

Edit: wow, this was a re­ally old com­ment re­ply. How did I just no­tice it...

• Very in­ter­est­ing prin­ci­ple, and one which I will bear in mind since I very re­cently had a spec­tac­u­lar failure to ap­ply it.

What hap­pens if we ap­ply this type of think­ing to Bayesian prob­a­bil­ity in gen­eral? It seems like we have to as­sign a small amount of prob­a­bil­ity to the claim that all our es­ti­mates are wrong, and that our meth­ods for com­ing to those es­ti­mates are ir­re­deemably flawed. This seems prob­le­matic to me, since I have no idea how to treat this prob­a­bil­ity, we can’t use Bayesian up­dat­ing on it for ob­vi­ous rea­sons.

Any­one have an idea about how to deal with this? Prefer­ably a bet­ter idea than “just don’t think about it” which is my cur­rent strat­egy.

• The is­sue is ba­si­cally that the ideal­ized Bayesian agent is as­sumed to be log­i­cally om­ni­scient and hu­mans clearly are not. It’s an open prob­lem in the Bayesian episte­mol­ogy liter­a­ture.

• What hap­pens if we ap­ply this type of think­ing to Bayesian prob­a­bil­ity in gen­eral? It seems like we have to as­sign a small amount of prob­a­bil­ity to the claim that all our es­ti­mates are wrong, and that our meth­ods for com­ing to those es­ti­mates are ir­re­deemably flawed. This seems prob­le­matic to me, since I have no idea how to treat this prob­a­bil­ity, we can’t use Bayesian up­dat­ing on it for ob­vi­ous rea­sons.

There is an Eliezer post on just this sub­ject. Any­one re­mem­ber the ti­tle?

• I’ve been look­ing through some of Eliezer’s posts on the sub­ject and the clos­est I’ve come is “Where Re­cur­sive Jus­tifi­ca­tion Hits Bot­tom”, which looks at the prob­lem that if you start with a suffi­ciently bad prior you will never at­tain ac­cu­rate be­liefs.

This is a slightly differ­ent prob­lem to the one I pointed out (though no less se­ri­ous, in fact I would say it’s more likely by sev­eral or­ders of mag­ni­tude). How­ever, un­like that case, where there re­ally is noth­ing you can do but try to self im­prove and hope you started above the cut-off point, my prob­lem seems like it might have an ac­tual solu­tion, I just can’t see what it is.

• You might be think­ing of Ends Don’t Jus­tify Means, which con­sid­ers the ques­tion “What if I’m run­ning on cor­rupt hard­ware”. It doesn’t ac­tu­ally say much about how a (would-be) ra­tio­nal agent ought to ad­just its opinion-form­ing mechanisms to deal with that pos­si­bil­ity, though.

[EDITED to re­move su­perflu­ous apos­tro­phe.]

• Any­one have an idea about how to deal with this?

I have been toy­ing with an idea for this based on an anal­ogy to evolu­tion­ary biol­ogy.

An or­ganism at­tempts to adapt to the en­vi­ron­ment it at­tempts to find it­self in, up to the limits al­lowed by its ge­netic pro­gram­ming. But a pop­u­la­tion of or­ganisms, all ex­posed to the same en­vi­ron­ment, can adapt even fur­ther—by mu­tat­ing the ge­netic pro­gram­ming of some of its mem­bers, and then us­ing nat­u­ral se­lec­tion to change the rel­a­tive pro­por­tions of differ­ent genomes in the pop­u­la­tion.

Similarly, a Bayesian at­tempts to ad­just his be­lief prob­a­bil­ities ac­cord­ing to the ev­i­dence he is ex­posed to, up to the limits al­lowed by his sys­tem of core as­sump­tions and pri­ors. But a pop­u­la­tion of Bayesi­ans, all ex­posed to the same ev­i­dence, can ad­just even fur­ther—by mu­tat­ing pri­ors and core be­liefs, and then us­ing a se­lec­tion pro­cess to ex­tin­guish those be­lief sys­tems that don’t work well in prac­tice and to repli­cate var­i­ants that do perform well.

Now, imag­ine that this pop­u­la­tion of Bayesi­ans ex­ists within the head of a sin­gle ra­tio­nal agent (well, al­most ra­tio­nal) and that de­ci­sion mak­ing is done by some kind of pro­por­tional vot­ing scheme (with neu­ral-net-like back-feed­back).

In this scheme, as­sign­ing prob­a­bil­ities of 0 or 1 to propo­si­tions is OK for a mem­ber of this Bayesian pop­u­la­tion. If that as­sign­ment is never re­futed, then there is some effi­ciency in re­mov­ing the ep­silons from the calcu­la­tions. How­ever, such a sub-agent risks be­ing ex­tin­guished should con­tra­dic­tory ev­i­dence ever arise.

• A true Bayesian is epistem­i­cally perfect. I could have differ­ent sub­rou­tines com­put­ing es­ti­mates con­di­tional on differ­ent chunks of my prior as a way to ap­prox­i­mate true Bayesi­anism, but if you have ac­cess to one Bayesian, you don’t need an­other.

• Are you 100% sure about that?

• I don’t know how to com­pute be­liefs, con­di­tional on it be­ing false.

• My point is that there are some propo­si­tions—for in­stance the epistemic perfec­tion of Bayesi­anism—to which you at­tach a prob­a­bil­ity of ex­actly 1.0. Yet you want to re­main free to re­ject some of those “100% sure” be­liefs at some fu­ture time, should ev­i­dence or ar­gu­ment con­vince you to do so. So, I am ad­vis­ing you to have one Bayesian in your head who be­lieves the ‘ob­vi­ous’, and at least one who doubts it. And then if the ob­vi­ous ever be­comes falsified, you will still have one Bayesian you can trust.

• I don’t think the other guy counts as a Bayesian.

That’s definitely a good ap­prox­i­ma­tion of the or­ga­ni­za­tional struc­ture of the hu­man mind of an im­perfect Bayesian. You have a hu­man con­scious­ness simu­lat­ing a Bayesian prob­a­bil­ity-com­puter, but the hu­man con­tains heuris­tics pow­er­ful enough to, in some situ­a­tions, over­rule the Bayesian.

This has noth­ing to do with ar­gu­ments, though.

• This doesn’t re­ally solve the prob­lem. If Bayesian up­dat­ing is flawed, and all the sub-agents use Bayesian up­dat­ing, then they are all un­trust­wor­thy. A bet­ter ap­proach might be to make some of the agents non-Bayesian (giv­ing them very low ini­tial weights). How­ever, this only pushes back the prob­lem, as it re­quires me to put 100% of my con­fi­dence in your method, rather than in Bayes the­o­rem.

• If Bayesian up­dat­ing is flawed, and all the sub-agents use Bayesian up­dat­ing, then they are all un­trust­wor­thy.

But Bayesian up­dat­ing is not flawed. What may be flawed are prior as­sump­tions and prob­a­bil­ities. All of the sub­agents should be Bayesian be­cause Bayes’s the­o­rem is the one unique solu­tion to the prob­lem of up­dat­ing. But there is no one unique solu­tion to the prob­lem of ax­io­m­a­tiz­ing logic and physics and on­tol­ogy. No one unique way to choose pri­ors. That is where choos­ing a va­ri­ety of solu­tions and choos­ing among them us­ing a nat­u­ral se­lec­tion pro­cess can be use­ful.

• The prob­lem I was speci­fi­cally ask­ing to solve is “what if Bayesian up­dat­ing is flawed”, which I thought was an ap­pro­pri­ate dis­cus­sion on an ar­ti­cle about not putting all your trust in any one sys­tem.

Bayes’ theorem looks solid, but I’ve been wrong about theorems before. So has the mathematical community (although not very often and not for this long, but it could happen and should not be assigned 0 probability). I’m slightly sceptical of the uniqueness claim, given I’ve often seen similar proofs which are mathematically sound, but make certain assumptions about what is allowed, and are thus vulnerable to out-of-the-box solutions (Arrow’s impossibility theorem is a good example of this). In fact, given that a significant proportion of statisticians are not Bayesians, I really don’t think this is a good time for absolute faith.

To give an­other ex­am­ple, sup­pose to­mor­row’s main page ar­ti­cle on LW is about an in­ter­est­ing the­o­rem in Bayesian prob­a­bil­ity, and one which would af­fect the way you up­date in cer­tain situ­a­tions. You can’t quite un­der­stand the proof your­self, but the ar­ti­cle’s writer is some­one whose math­e­mat­i­cal abil­ity you re­spect. In the com­ments, some other peo­ple ex­press con­cern with cer­tain parts of the proof, but you still can’t quite see for your­self whether its right or wrong. Do you ap­ply it?

• “what if Bayesian up­dat­ing is flawed”

As­sign a prob­a­bil­ity 1-ep­silon to your be­lief that Bayesian up­dat­ing works. Your be­lief in “Bayesian up­dat­ing works” is de­ter­mined by Bayesian up­dat­ing; you there­fore be­lieve with 1-ep­silon prob­a­bil­ity that “Bayesian up­dat­ing works with prob­a­bil­ity 1-ep­silon”. The base level be­lief is then held with prob­a­bil­ity less than 1-ep­silon.

As the recursive nature of holding Bayesian beliefs about believing Bayesianly allows the chain to grow arbitrarily long, the probability of the base-level belief tends towards zero.

There is a flaw with Bayesian up­dat­ing.

I think this is just a semi-for­mal ver­sion of the prob­lem of in­duc­tion in Bayesian terms, though. Un­for­tu­nately the an­swer to the prob­lem of in­duc­tion was “pre­tend it doesn’t ex­ist and things work bet­ter”, or some­thing like that.

• I think this is a form of dou­ble-count­ing the same ev­i­dence. You can only perform Bayesian up­dat­ing on in­for­ma­tion that is new; if you try to up­date on in­for­ma­tion that you’ve already in­cor­po­rated, your prob­a­bil­ity es­ti­mate shouldn’t move. But if you take in­for­ma­tion you’ve already in­cor­po­rated, shuffle the terms around, and pre­tend it’s new, then you’re in­tro­duc­ing fake ev­i­dence and get an in­cor­rect re­sult. You can add a term for “Bayesian up­dat­ing might not work” to any model, ex­cept to a model that already ac­counts for that, as mod­els of the prob­a­bil­ity that Bayesian up­dat­ing works surely do. That’s what’s hap­pen­ing here; you’re adding “there is an ep­silon prob­a­bil­ity that Bayesian up­dat­ing doesn’t work” as ev­i­dence to a model that already uses and con­tains that in­for­ma­tion, and count­ing it twice (and then count­ing it n times).

• You can also fash­ion a similar prob­lem re­gard­ing pri­ors.

• Deter­mine what method you should use to as­sign a prior in a cer­tain situ­a­tion.

• Then de­ter­mine what method you should use to as­sign a prior to “I picked the wrong method to as­sign a prior in that situ­a­tion”.

• Then de­ter­mine what method you should to as­sign a prior to “I picked the wrong method to as­sign a prior to “I picked the wrong method to as­sign a prior in that situ­a­tion” ”.

This doesn’t seem like dou­ble-count­ing of any­thing to me; at no point can you as­sume you have picked the right method for any prior-as­sign­ing with prob­a­bil­ity 1.

• This one is differ­ent, in that the ev­i­dence you’re in­tro­duc­ing is new. How­ever, the mag­ni­tude of the effect of each new piece of ev­i­dence on your origi­nal prob­a­bil­ity falls off ex­po­nen­tially, such that the origi­nal prob­a­bil­ity con­verges.
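The difference between the two regimes is easy to see numerically. The 0.9 starting confidence and the 0.5 decay rate below are arbitrary illustration choices:

```python
p_constant = 0.9   # discounted by the same factor at every meta-level
p_decaying = 0.9   # discounted by exponentially shrinking corrections

for level in range(1, 50):
    p_constant *= 0.9                      # constant discount: diverges to 0
    p_decaying *= 1 - 0.1 * 0.5 ** level   # geometric falloff: converges

print(p_constant)   # ~0.005 and still falling: the belief is driven to 0
print(p_decaying)   # ~0.81 and stable: the belief converges
```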

• I’m pretty sure there is an er­ror in your rea­son­ing. And I’m pretty sure the source of the er­ror is an un­war­ranted as­sump­tion of in­de­pen­dence be­tween propo­si­tions which are ac­tu­ally en­tan­gled—in fact, log­i­cally equiv­a­lent.

But I can’t be sure there is an er­ror un­less you make your ar­gu­ment more for­mal (i.e. sym­bol in­ten­sive).

• I think it would take the form of X be­ing an out­come, p(X) be­ing the prob­a­bil­ity of the out­come as de­ter­mined by Bayesian up­dat­ing, “p(X) is cor­rect” be­ing the out­come Y, p(Y) be­ing the prob­a­bil­ity of the out­come as de­ter­mined by Bayesian up­dat­ing, “p(Y) is cor­rect” be­ing the out­come Z, and so forth.

If you have any par­tic­u­lar style or method of for­mal­is­ing you’d like me to use, men­tion it, and I’ll see if I can rephrase it in that way.

• I don’t un­der­stand the phrase “p(X) is cor­rect”.

Also I need a sketch of the ar­gu­ment that went from the prob­a­bil­ity of one propo­si­tion be­ing 1-ep­silon to the prob­a­bil­ity of a differ­ent propo­si­tion be­ing smaller than 1-ep­silon.

• p(X) is a mea­sure of my un­cer­tainty about out­come X—“p(X) is cor­rect” is the out­come where I de­ter­mined my un­cer­tainty about X cor­rectly. There are also out­comes where I in­cor­rectly de­ter­mined my un­cer­tainty about X. I there­fore need to have a mea­sure of my un­cer­tainty about out­come “I de­ter­mined my un­cer­tainty cor­rectly”.

Also I need a sketch of the ar­gu­ment that went from the prob­a­bil­ity of one propo­si­tion be­ing 1-ep­silon to the prob­a­bil­ity of a differ­ent propo­si­tion be­ing smaller than 1-ep­silon.

The ar­gu­ment went from the ini­tial prob­a­bil­ity of one propo­si­tion be­ing 1-ep­silon to the up­dated prob­a­bil­ity of the same propo­si­tion be­ing less than 1-ep­silon, be­cause there was higher-or­der un­cer­tainty which mul­ti­plies through.

A toy ex­am­ple: We are 90% cer­tain that this ob­ject is a blegg. Then, we re­ceive ev­i­dence that our method for de­ter­min­ing 90% cer­tainty gives the wrong an­swer one case in ten. We are 90% cer­tain that we are 90% cer­tain, or in other words—we are 81% cer­tain that the ob­ject in ques­tion is a blegg.

Now that we’re 81% cer­tain, we re­ceive ev­i­dence that our method is flawed one case in ten—we are now 90% cer­tain that we are 81% cer­tain. Or, we’re 72.9% cer­tain. Etc. Ob­vi­ously ep­silon de­grades much slower, but we don’t have any rea­son to stop ap­ply­ing it to it­self.

• Thank-you for ex­press­ing my worry in much bet­ter terms than I man­aged to. If you like, I’ll link to your com­ment in my top-level com­ment.

I still don’t know why ev­ery­one thinks this is the prob­lem of in­duc­tion. You can cer­tainly have an agent which is Bayesian but doesn’t use in­duc­tion (the prior which as­signs equal prob­a­bil­ity to all pos­si­ble se­quences of ob­ser­va­tion is non-in­duc­tive). I’m not sure if you can have a non-Bayesian that uses in­duc­tion, be­cause I’m very con­fused about the whole sub­ject of ideal non-Bayesian agents, but it seems like you prob­a­bly could.

Interesting that Bayesian updating seems to be flawed if and only if you assign non-zero probability to the claim that it is flawed. If I was feeling mischievous I would compare it to a religion: it works so long as you have absolute faith, but if you doubt even for a moment it doesn’t.

• I still don’t know why ev­ery­one thinks this is the prob­lem of in­duc­tion.

It’s similar to Hume’s philo­soph­i­cal prob­lem of in­duc­tion (here and here speci­fi­cally). In­duc­tion in this sense is con­trasted with de­duc­tion—you could cer­tainly have a Bayesian agent which doesn’t use in­duc­tion (never draws a gen­er­al­i­sa­tion from spe­cific ob­ser­va­tions) but I think it would nec­es­sar­ily be less effi­cient and less effec­tive than a Bayesian agent that did.

• If you like, I’ll link to your com­ment in my top-level com­ment.

Feel free! I am all for in­creas­ing the num­ber of minds churn­ing away at this prob­lem—the more Bayesi­ans that are try­ing to find a way to jus­tify Bayesian meth­ods, the higher the prob­a­bil­ity that a cor­rect jus­tifi­ca­tion will oc­cur. As­sum­ing we can weed out the mo­ti­vated or bi­ased jus­tifi­ca­tions.

• I’d love to see some­one like EY tackle the above com­ment.

On a side note, why do I get an er­ror if I click on the user­name of the par­ent’s au­thor?

• I’m ac­tu­ally plan­ning on tack­ling it my­self in the next two weeks or so. I think there might be a solu­tion that has a de­duc­tive jus­tifi­ca­tion for in­duc­tive rea­son­ing. EY has already tack­led prob­lems like this but his post seems to be a much stronger var­i­ant on Hume’s “it is cus­tom, and it works”—plus a dis­tinc­tion be­tween self-re­flec­tive loops and cir­cu­lar loops. That dis­tinc­tion is how I cur­rently ra­tio­nal­ise ig­nor­ing the prob­lem of in­duc­tion in ev­ery­day life.

Also—I too do not know why I don’t have an overview page.

• I’ve often seen similar proofs which are mathematically sound, but make certain assumptions about what is allowed, and are thus vulnerable to out-of-the-box solutions (Arrow’s impossibility theorem is a good example of this).

You have piqued my cu­ri­os­ity. A trick to get around Ar­row’s the­o­rem? Do you have a link?

Regarding your main point: sure, if you want some members of your army of mutant rational agents to be so mutated that they are no longer even Bayesians, well… go ahead. I suppose I have more faith in the rough validity of trial-and-error empiricism than I do in Bayes’s theorem. But not much more faith.

• I’m afraid I don’t know how to post links.

I think there is already a main-page article on this subject, but the general idea is that Arrow’s theorem assumes the voting system is preferential (you vote by ranking candidates), so you can get around it with a non-preferential system.

Range voting (each voter gives each candidate a score out of ten, and the candidate with the highest total wins) is the one that springs most easily to mind, but it has problems of its own, so somebody who knows more about the subject can probably give you a better example.
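The tally rule just described is simple enough to sketch; the ballots and candidate names below are made up for illustration:

```python
# Minimal sketch of range voting: each voter scores each candidate
# 0-10, and the candidate with the highest total wins.
ballots = [
    {"A": 10, "B": 6, "C": 0},
    {"A": 3,  "B": 9, "C": 5},
    {"A": 8,  "B": 7, "C": 2},
]

totals = {}
for ballot in ballots:
    for candidate, score in ballot.items():
        totals[candidate] = totals.get(candidate, 0) + score

winner = max(totals, key=totals.get)
print(totals)   # {'A': 21, 'B': 22, 'C': 7}
print(winner)   # B
```

Because voters submit scores rather than rankings, the system is outside the scope of Arrow’s preferential-ballot assumption, which is exactly the loophole being discussed.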

As for the main point, I doubt you actually put 100% confidence in either idea. In the unlikely event that either approach led you to a contradiction, would you just curl up in a ball and go insane, or abandon it?

• I think there is already a main-page article on this subject, but the general idea is that Arrow’s theorem assumes the voting system is preferential (you vote by ranking candidates), so you can get around it with a non-preferential system.

Ah. You mean this post­ing. It is a good ar­ti­cle, and it sup­ports your point about not trust­ing proofs un­til you read all of the fine print (with the warn­ing that there is always some fine print that you miss read­ing).

But it doesn’t re­ally over­throw Ar­row. The “workaround” can be “gamed” by the play­ers if they ex­ag­ger­ate the differ­ences be­tween their choices so as to skew the fi­nal solu­tion in their own fa­vor.

• All deterministic non-dictatorial systems can be gamed to some extent (the Gibbard-Satterthwaite theorem; I’m reasonably confident that this one doesn’t have a workaround), although range voting is worse than most. That doesn’t change the fact that it is a counterexample to Arrow.

A better one might be approval voting, where you can vote for as many candidates as you want but you can’t vote for the same candidate more than once (equivalent to the degenerate case of range voting where there are only two scores you can give).

Thanks for the help with the links.

• I’m afraid I don’t know how to post links.

Next time you com­ment, click on the Help link to the lower right of the com­ment edit­ing box.

• But it’s hard for me to be prop­erly out­raged about this, be­cause the con­clu­sion that the LHC will not de­stroy the world is cor­rect.

What is your ar­gu­ment for claiming that the LHC will not de­stroy the world?

That the world still exists despite ongoing experiments is easily explained by the fact that we are necessarily living in those branches of the universe where the LHC didn’t destroy the world. (On a related side note: has the great filter been found yet?)

• Good point. I’ve changed this to “since the LHC did not de­stroy the world”, which is true re­gard­less of whether it de­stroyed other branches.

• “‘This person believes he could make one statement about an issue as difficult as the origin of cellular life per Planck interval, every Planck interval from the Big Bang to the present day, and not be wrong even once’ only brings us to 1/10^61 or so.”

Wouldn’t that be 1/2^(10^61), or am I missing something?
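For reference, the ~10^61 in the quoted passage is roughly the number of Planck intervals since the Big Bang, which is easy to sanity-check (standard values, my own arithmetic, not from the thread):

```python
# Rough sanity check of the ~10^61 figure: the number of Planck
# intervals elapsed since the Big Bang.
age_universe_s = 13.8e9 * 3.156e7   # ~13.8 billion years, in seconds
planck_time_s  = 5.39e-44           # Planck time, in seconds

intervals = age_universe_s / planck_time_s
print(f"Planck intervals since Big Bang: {intervals:.1e}")  # roughly 8e60
```

The quoted post seems to read 1/10^61 as “at most one error in ~10^61 statements”, i.e. a per-statement error rate, rather than the probability of one particular sequence of ~10^61 outcomes, which would indeed be on the order of 1/2^(10^61); the original post is the authority here, though.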

• On the LHC black holes vs. cosmic-ray black holes: both kinds of black holes emerge with nonzero charge and will very rapidly brake to a halt. And there are cosmic rays hitting neutron stars as well, and cosmic rays colliding in the magnetic fields of neutron stars, LHC-style. The bottom line is, the LHC has to be extremely exceptional to destroy the Earth. It just doesn’t look that exceptional.

The thing is that a very tiny black hole has an incredibly low accretion rate (quite a reliable argument here; it takes a long time to push the Earth through a needle’s eye, even at very high pressure), and even if we had many of those inside stars, planets, etc., we would never know. The LHC may have ‘doomed’ the Earth, in the sense of it being destroyed over a timespan of many billions of years.

The more interesting example would be PRA (probabilistic risk analysis), such as is done for the space shuttle, nuclear reactors, et cetera. The risk is calculated as a sum of risks over a very small selection of events (picked out of the space of possible events), and the minuscule risk figures that get calculated are representative not of a low probability of failure but of a low probability that the failure will be among the N guesses.

At the same time, we have no good reason to believe PRA works at all, and plenty of examples (the space shuttle, nuclear reactors) where PRA was found to be off by a factor of 1000 (a high-confidence result, since it’s highly unlikely the space shuttle PRA was correct and yet two shuttles were lost).

The way I’d de­scribe PRA is as es­ti­mat­ing failure rate of a ball bear­ing in a car by adding up failure rates of the in­di­vi­d­ual balls and other com­po­nents. That’s ob­vi­ously ab­surd; the balls and their en­vi­ron­ment in­ter­act in such com­pli­cated, non-lin­ear ways that you can’t pre­dict their failure rates by adding up com­po­nent failure rates.

If a method clearly won’t work for something as simple as a ball bearing, why would anyone assume it’d work for a space shuttle or a nuclear power plant, which are much, much more complex than a ball bearing? My theory is that those things are so complicated that a person has such difficulty reasoning about them as to be unable even to see that they are too complex for PRA to work, while a ball bearing is simple enough. At the same time there’s a demand for some number to be given; this demand creates pseudoscience.

• I’m a bit irked by the con­tinued per­sis­tence of “LHC might de­stroy the world” noise. Given no ev­i­dence, the prior prob­a­bil­ity that micro­scopic black holes can form at all, across all pos­si­ble sys­tems of physics, is ex­tremely small. The same the­ory (String The­ory[1]) that has led us to sug­gest that micro­scopic black holes might form is also quite adamant that all black holes evap­o­rate, and just as adamant that micro­scopic ones evap­o­rate faster than larger ones, by a pre­cise fac­tor of the mass ra­tio cubed. If we think the the­ory is talk­ing com­plete non­sense, then the pos­te­rior prob­a­bil­ity of an LHC black hole form­ing in the first place goes down, be­cause we slide back to the prior of a uni­verse with­out micro­scopic black holes.

Thus, the “LHC might destroy the world” noise boils down to the possibility that (A) there is some mathematically consistent post-GR, microscopic-black-hole-predicting theory that has massively slower evaporation, (B) this unnamed and possibly non-existent theory is less Kolmogorov-complex and hence more posterior-probable than the one that scientists are currently using[2], and (C) scientists have completely overlooked this unnamed and possibly non-existent theory for decades, strongly suggesting that it has a large Levenshtein distance from the currently favored theory. The simultaneous satisfaction of these three criteria seems… pretty f’ing unlikely, since each tends to reject the others. A/B: it’s hard to imagine a theory that predicts post-GR physics with LHC-scale microscopic black holes that’s more Kolmogorov-simple than String Theory, which can actually be specified pretty damn compactly. B/C: people have already explored the Kolmogorov-simple space of post-Newtonian theories pretty heavily, and even the simple post-GR theories are pretty well explored, making it unlikely that even a theory with a large edit distance from either ST or SM+GR has been overlooked. C/A: it seems like a hell of a coincidence that a large-edit-distance theory, i.e. one extremely dissimilar to ST, would just happen to also predict the formation of LHC-scale microscopic black holes, then go on to predict that they’re *stable* on the order of hours or more by throwing out the mass-cubed rule[3], then go on to explain why we don’t see them by the billions despite their claimed stability. (If the ones from cosmic rays are so fast that the resulting black holes zip through Earth, why haven’t they eaten Jupiter, the Sun, or other nearby stars yet? Bombardment by cosmic rays is not unique to Earth, and there are plenty of celestial bodies that would be heavy enough to capture the products.)

[1] It’s worth not­ing that our best the­ory, the Stan­dard Model with Gen­eral Rel­a­tivity, does not pre­dict micro­scopic black holes at LHC en­er­gies. Only String The­ory does: ST’s 11-di­men­sional com­pactified space is sup­posed to sud­denly de­com­pactify at high en­ergy scales, mak­ing grav­ity much more pow­er­ful at small scales than GR pre­dicts, thus al­low­ing black hole for­ma­tion at ab­nor­mally low en­er­gies, i.e. those ac­cessible to LHC. And GR with­out the SM doesn’t pre­dict micro­scopic black holes. At all. Naked GR only pre­dicts su­per­nova-sized black holes and larger.

[2] The biggest pain of SM+GR is that, even though we’re pretty damn sure that that train wreck can’t be right, we haven’t been able to find any dis­con­firm­ing data that would lead the way to a bet­ter the­ory. This means that, if the cor­rect the­ory were more Kol­mogorov-com­plex than SM+GR, then we would still be forced as ra­tio­nal­ists to trust SM+GR over the cor­rect the­ory, be­cause there wouldn’t be enough Bayesian ev­i­dence to dis­crim­i­nate the com­plex-but-cor­rect the­ory from the countless com­plex-but-wrong the­o­ries. Thus, if we are to be con­vinced by some al­ter­na­tive to SM+GR, ei­ther that al­ter­na­tive must be Kol­mogorov-sim­pler (like String The­ory, if that pans out), or that al­ter­na­tive must sug­gest a clear ex­per­i­ment that leads to a di­rect dis­con­fir­ma­tion of SM+GR. (The more-com­plex al­ter­na­tive must also some­how at­tract our at­ten­tion, and also hint that it’s worth our time to calcu­late what the clear ex­per­i­ment would be. Sim­ple the­o­ries get eye­balls, but there are lots of more-com­plex the­o­ries that we never bother to pon­der be­cause that solu­tion-space doesn’t look like it’s worth our time.)

[3] Even if they were stable on the order of seconds to minutes, they wouldn’t destroy the Earth: the resulting black holes would be smaller than an atom, in fact smaller than a proton, and since atoms are mostly empty space the black hole would sail through atoms with low probability of collision. I recall that someone familiar with the physics did the math and calculated that an LHC-sized black hole could swing like a pendulum through the Earth a hundred times before gobbling up even a single proton, and the same calculation showed it would take over 100 years before the black hole grew large enough to start collapsing the Earth due to tidal forces, assuming zero evaporation. Keep in mind that the relevant computation, t = (5120 × π × G^2 × M^3) ÷ (ℏ × c^4), shows that a 1-second evaporation time corresponds to 2.28e8 grams[3a], i.e. 250 tons, and the resulting radius, r = (2 × G × M) ÷ c^2, is 3.39e-22 meters[3b], or about 0.4 millionths of a proton radius[3c]. That one-second-duration black hole, despite being tiny, is vastly larger than the ones that might be created by the LHC -- 10^28 times larger, in fact[3d]. (FWIW, the Schwarzschild radius calculation relies only on GR, with no quantum stuff, while the time-to-evaporate calculation depends on some basic QM as well. String Theory and the Standard Model both leave that particular bit of QM untouched.)

[3a] Google Calculator: “(((1 s) h c^4) / (2pi 5120pi G^2)) ^ (1/3) in grams”
[3b] Google Calculator: “2 G 2.28e8 grams / c^2 in meters”
[3c] Google Calculator: “3.3856695e-22 m / 0.8768 femtometers”, where 0.8768 femtometers is the experimentally accepted charge radius of a proton
[3d] Google Calculator: “(2.28e8 g * c^2) / 14 TeV”, where 14 TeV is the LHC’s maximum energy (7 TeV per beam in a head-on proton-proton collision)
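The quoted figures are easy to re-derive from the two formulas in footnote [3]; here is a quick sketch using standard constants (my own check, not the original poster’s calculation):

```python
# Re-derive the footnote figures: invert the Hawking evaporation time
# t = 5120*pi*G^2*M^3 / (hbar*c^4) to find the mass with a 1 s lifetime,
# then compute its Schwarzschild radius r = 2GM/c^2.
import math

G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34     # reduced Planck constant, J s
c    = 2.998e8        # speed of light, m/s

t = 1.0                                                     # seconds
M = (t * hbar * c**4 / (5120 * math.pi * G**2)) ** (1 / 3)  # kg
print(f"mass for 1 s lifetime: {M * 1e3:.3g} g")            # ~2.28e8 g

r = 2 * G * M / c**2
print(f"Schwarzschild radius: {r:.3g} m")                   # ~3.39e-22 m

# Compare that mass with the LHC's 14 TeV collision energy, as mass.
m_lhc = 14e12 * 1.602e-19 / c**2                            # kg
print(f"ratio vs LHC collision energy: {M / m_lhc:.2g}")    # ~1e28
```

The numbers match the Google Calculator results quoted in footnotes [3a], [3b], and [3d] to within rounding.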

• Fi­nally, con­sider the ques­tion of whether you can as­sign 100% cer­tainty to a math­e­mat­i­cal the­o­rem for which a proof exists

To ground this issue in more concrete terms, imagine you are writing an algorithm to compress images made up of 8-bit pixels. The algorithm plows through several rows until it comes to a pixel and predicts that the distribution of that pixel is Gaussian with a mean of 128 and a variance of 0.1. The model probability that the real value of the pixel is 255 is then some astronomically small number, but the system must reserve some probability (and thus code space) for that outcome. If it does not, it violates the general contract that a lossless compression algorithm should assign a code to any input, though some inputs will end up being inflated. In other words, it risks breaking.

On the other hand, it is completely reasonable for it to assign zero probability to the outcome that the pixel value is 300. That all pixel values fall between 0 and 255 is a deductive consequence of the problem definition.
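The contract described above can be sketched in a few lines (a hypothetical model, not from any particular codec): mixing the Gaussian prediction with a tiny uniform “escape” distribution keeps every legal pixel value codable at finite length, while out-of-range values like 300 need no code space at all.

```python
# Sketch of why a lossless coder must reserve code space: the ideal
# code length for a symbol of model probability p is -log2(p) bits,
# so p = 0 would mean no code exists for that symbol at all.
import math

def gaussian_p(x, mean, var):
    """Unnormalized Gaussian density at integer pixel value x."""
    return math.exp(-(x - mean) ** 2 / (2 * var))

def model(mean=128.0, var=0.1, escape=1e-6):
    """Mix the Gaussian prediction with a uniform 'escape' distribution
    over the 256 legal pixel values, so every legal value stays codable."""
    raw = [gaussian_p(x, mean, var) for x in range(256)]
    z = sum(raw)
    return [(1 - escape) * r / z + escape / 256 for r in raw]

p = model()
print(f"code length for 128: {-math.log2(p[128]):.2f} bits")  # tiny
print(f"code length for 255: {-math.log2(p[255]):.1f} bits")  # long but finite
# A value of 300 is outside the 8-bit alphabet, so it gets no code at all.
```

The escape mass inflates the likely symbols by a negligible amount, which is exactly the trade the comment describes: a little wasted code space in exchange for never breaking on a legal input.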

• The ar­gu­ment was that since cos­mic rays have been perform­ing par­ti­cle col­li­sions similar to the LHC’s zillions of times per year, the chance that the LHC will de­stroy the world is ei­ther liter­ally zero,

This ar­gu­ment doesn’t work for an­thropic rea­sons. It could be that in the vast ma­jor­ity of Everett branches Earth was wiped out by cos­mic ray col­li­sions.

• Anthropic reasoning only goes so far. Even if I accept the silliness in which zillions of Earths are destroyed every year for each one that survives… the other planets in the solar system could also have been destroyed. And the stars and galaxies in the sky would all have been devoured by now, no? And no anthropic reasons would prevent us from witnessing that from a safe distance.

Here’s a fun game: Try to dis­prove the hy­poth­e­sis that ev­ery sin­gle time some­one says “Abra­cadabra” there’s a 99.99% chance that the world gets de­stroyed.

• Here’s a fun game: Try to dis­prove the hy­poth­e­sis that ev­ery sin­gle time some­one says “Abra­cadabra” there’s a 99.99% chance that the world gets de­stroyed.

We haven’t been an­throp­i­cally forced into a world where hu­mans can’t say “Abra­cadabra”.

• Here’s a fun game: Try to dis­prove the hy­poth­e­sis that ev­ery sin­gle time some­one says “Abra­cadabra” there’s a 99.99% chance that the world gets de­stroyed.

We haven’t been an­throp­i­cally forced into a world where hu­mans can’t say “Abra­cadabra”.

Oh, but a non-triv­ial num­ber of peo­ple have mild su­per­sti­tions against say­ing “Abra­cadabra”. Does this not con­sti­tute (weak) an­thropic ev­i­dence?

• You’ve just mur­dered 99.99% of all Earths. Our Everett branch sur­vived for an­thropic rea­sons.

• This is to­tally testable. I’m go­ing to down­load some raw quan­tum noise. If the first byte isn’t FF I will say the magic word. I will then re­port back what the first byte was.

Up­date: the first byte was 1B

...

Still here.

• Initially this was anthropic evidence for normality, but only until people had had time to replicate the experiment. Suppose the word really were that dangerous, and the first byte had been FF. By now, all the people replicating the experiment would have destroyed those universes. Only the universes where the experiment failed to show FF on the first try are still around.

• Which means we have to cut down on the worlds where FF didn’t happen. Say it with me, everyone.

If everyone who reads this comment says the word, say, thirty times, we should be good, right?

• At what point would you have ac­cepted that say­ing “Abra­cadabra” does de­stroy the world? How would you have felt about that? And what ser­vice have you been us­ing? I only know about ran­dom.org. Thanks.

ETA:

• HotBits gen­er­ates ran­dom num­bers from ra­dioac­tive de­cay.

• QRBG Quan­tum Ran­dom Bit Generator

• I used this one. After two FFs I would have de­cided I was in a simu­la­tion which some Less Wrong poster had set up post-sin­gu­lar­ity to screw with us. Those kind of Carte­sian Joker sce­nar­ios are way more prob­a­ble than “Abra­cadabra” de­stroy­ing the world…

• Just two FFs? That doesn’t seem all that improbable, even forgetting all thought of world destruction. After about 100 FFs I would suspect that there was a problem with my experimental procedure (e.g. the internet quantum byte source being broken). That too would be testable. (“I’m not going to say Abracadabra this time. FF? FF? Now I am. FF? FF?”)

• Well two FFs by chance is 1 in 65536. And my prior for “I’m in a simu­la­tion” isn’t that low. You’re right about the ser­vice be­ing bro­ken or fraud­u­lent and re­ally right about need­ing to test what hap­pens if I don’t say Abra­cadabra. But you definitely don’t have to wait for 100 FFs!

• Well two FFs by chance is 1 in 65536. And my prior for “I’m in a simu­la­tion” isn’t that low.

That isn’t the num­ber to con­sider here. The rele­vant prior is “I’m in a simu­la­tion and this par­tic­u­lar simu­la­tion in­volves the abra­cadabra trick”. That num­ber is quite a bit lower!

You’re right about the ser­vice be­ing bro­ken or fraud­u­lent and re­ally right about need­ing to test what hap­pens if I don’t say Abra­cadabra. But you definitely don’t have to wait for 100 FFs!

True enough. I es­ti­mate that I’d start test­ing af­ter 4 or 5. :)

• That isn’t the num­ber to con­sider here. The rele­vant prior is “I’m in a simu­la­tion and this par­tic­u­lar simu­la­tion in­volves the abra­cadabra trick”. That num­ber is quite a bit lower!

Yeah. Hmm. I don’t really have a stable estimate of that probability. Of course, it’s not like I would have stopped after two trials, but at that point I’m pouring myself a drink. Worth noting that by coming up with the hypothesis I drastically increased its probability, and then by mentioning it here I increased its probability even further.

I es­ti­mate that I’d start test­ing af­ter 4 or 5. :)

Would you mind at­tempt­ing to nar­rate any in­ter­nal di­a­log you’d imag­ine your­self hav­ing af­ter the 3rd? Lol.

• Would you mind at­tempt­ing to nar­rate any in­ter­nal di­a­log you’d imag­ine your­self hav­ing af­ter the 3rd? Lol.

“Um. WTF? Is this even work­ing?”

(Yes, since the test is so triv­ial I might even click through a test af­ter 2. I just wouldn’t start sus­pect­ing mod­ded sims.)

• After two FFs I would have de­cided I was in a simu­la­tion which some Less Wrong poster had set up post-sin­gu­lar­ity to screw with us.

Really?

• Well, the chance is 1 in 65,536. Is there some hypothesis I’ve neglected?

• The per­son run­ning the qrng server de­cided to screw with you.

• Damn!

• And no an­thropic rea­sons would pre­vent us from wit­ness­ing that from a safe dis­tance.

I ac­cept this counter-ar­gu­ment.

Try to dis­prove the hy­poth­e­sis that ev­ery sin­gle time some­one says “Abra­cadabra” there’s a 99.99% chance that the world gets de­stroyed.

This is un­likely be­cause it is wildly in­com­pat­i­ble with ev­ery­thing we know about physics, not be­cause we have never ob­served it to hap­pen. It is un­likely be­cause it has an ex­tremely low prior prob­a­bil­ity, not be­cause we have any (di­rect) ev­i­dence against it.

• I should like to know Yvain’s prior on this.

• On the “abracadabra” example? The overwhelming majority would come from the possibility that any time anything whatsoever happens the world is “destroyed”, for some weird, maybe anthropic, use of the word “destroyed” that I don’t understand, compatible with me still being here.

If we limit it to “abracadabra” and nothing else, that’s complex enough that it’s < 1/trillion just from picking it out of hypothesis space (lots of combinations of sounds that could destroy the world, lots of things that aren’t combinations of sounds).

• Just the world? Well, all you need is a good rocket ship so you aren’t on it any­more, and take a look.

If you mean de­stroy the MW branch in which it’s said, then Nick Tar­leton’s an­swer works—that rule would make the choice to say ‘Abra­cadabra’ far smaller in prob­a­bil­ity than say­ing similar things that don’t de­stroy the world. Peo­ple say­ing that one thing would be greatly sup­pressed rel­a­tive to, say, “Alakazam” or “Poof” or “Presto Change-o”, and it would quickly leave the lex­i­con.

• In­deed—none of us would have ever heard it.

• Per­haps rather than just caus­ing a black hole, it causes a tear in space-time that ex­pands at the speed of light. By the time you see it, you’re already dead.

Of course, there’s still the fact that early wor­lds would be weighted much more heav­ily, so this is prob­a­bly about the first in­stant that you ex­ist. And there’s the fact that, if that’s true, the LHC wouldn’t de­crease the ex­pected life­time of the world by a no­tice­able amount.

• Per­haps rather than just caus­ing a black hole, it causes a tear in space-time that ex­pands at the speed of light. By the time you see it, you’re already dead.

I feel vaguely dis­ap­prov­ing of an­thropic rea­son­ing when it re­wards elab­o­rate and con­trived sce­nar­ios over sim­pler ones with similar char­ac­ter­is­tics.

• There are some in­ter­est­ing replies here.

• This post raises very similar is­sues to those dis­cussed in com­ments here.