# When (Not) To Use Probabilities

It may come as a surprise to some readers of this blog that I do not always advocate using probabilities.

Or rather, I don’t always ad­vo­cate that hu­man be­ings, try­ing to solve their prob­lems, should try to make up ver­bal prob­a­bil­ities, and then ap­ply the laws of prob­a­bil­ity the­ory or de­ci­sion the­ory to what­ever num­ber they just made up, and then use the re­sult as their fi­nal be­lief or de­ci­sion.

The laws of prob­a­bil­ity are laws, not sug­ges­tions, but of­ten the true Law is too difficult for us hu­mans to com­pute. If P != NP and the uni­verse has no source of ex­po­nen­tial com­put­ing power, then there are ev­i­den­tial up­dates too difficult for even a su­per­in­tel­li­gence to com­pute—even though the prob­a­bil­ities would be quite well-defined, if we could af­ford to calcu­late them.

So sometimes you don’t apply probability theory. Especially if you’re human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning that don’t involve verbal probability assignments.

Not sure where a fly­ing ball will land? I don’t ad­vise try­ing to for­mu­late a prob­a­bil­ity dis­tri­bu­tion over its land­ing spots, perform­ing de­liber­ate Bayesian up­dates on your glances at the ball, and calcu­lat­ing the ex­pected util­ity of all pos­si­ble strings of mo­tor in­struc­tions to your mus­cles.

Trying to catch a flying ball, you’re probably better off with your brain’s built-in mechanisms than using deliberative verbal reasoning to invent or manipulate probabilities.

But this doesn’t mean you’re go­ing be­yond prob­a­bil­ity the­ory or above prob­a­bil­ity the­ory.

The Dutch Book ar­gu­ments still ap­ply. If I offer you a choice of gam­bles (\$10,000 if the ball lands in this square, ver­sus \$10,000 if I roll a die and it comes up 6), and you an­swer in a way that does not al­low con­sis­tent prob­a­bil­ities to be as­signed, then you will ac­cept com­bi­na­tions of gam­bles that are cer­tain losses, or re­ject gam­bles that are cer­tain gains...
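To make the sure-loss mechanic concrete, here is a minimal sketch with made-up ticket prices: if your stated fair prices for “\$1 if A” and “\$1 if not-A” do not sum to \$1, a bookie trading at your own prices profits in every state of the world.

```python
# Minimal Dutch Book sketch; the prices are invented for illustration.
# If your fair prices for the two complementary tickets don't sum to $1,
# a bookie who trades at your own prices locks in a sure profit.

def guaranteed_loss(price_a: float, price_not_a: float) -> float:
    """Bookie sells you both tickets at your stated prices.

    Exactly one ticket pays out $1, so your net outcome is the same in
    every state of the world: payout minus cost = 1 - (price_a + price_not_a).
    Returns your guaranteed loss; a negative result would be a guaranteed
    gain, in which case the bookie buys from you instead of selling.
    """
    return (price_a + price_not_a) - 1.0

# Incoherent answers: $0.75 for "ball lands in this square" and
# $0.50 for "ball does not land in this square".
print(guaranteed_loss(0.75, 0.50))  # a sure loss of 25 cents

# Coherent answers sum to 1 and admit no sure loss either way:
print(guaranteed_loss(0.60, 0.40))
```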

Which still doesn’t mean that you should try to use de­liber­a­tive ver­bal rea­son­ing. I would ex­pect that for pro­fes­sional base­ball play­ers, at least, it’s more im­por­tant to catch the ball than to as­sign con­sis­tent prob­a­bil­ities. In­deed, if you tried to make up prob­a­bil­ities, the ver­bal prob­a­bil­ities might not even be very good ones, com­pared to some gut-level feel­ing—some word­less rep­re­sen­ta­tion of un­cer­tainty in the back of your mind.

There is noth­ing priv­ileged about un­cer­tainty that is ex­pressed in words, un­less the ver­bal parts of your brain do, in fact, hap­pen to work bet­ter on the prob­lem.

And while ac­cu­rate maps of the same ter­ri­tory will nec­es­sar­ily be con­sis­tent among them­selves, not all con­sis­tent maps are ac­cu­rate. It is more im­por­tant to be ac­cu­rate than to be con­sis­tent, and more im­por­tant to catch the ball than to be con­sis­tent.

In fact, I generally advise against making up probabilities unless it seems like you have some decent basis for them. Making up numbers otherwise only fools you into believing that you are more Bayesian than you actually are.

To be spe­cific, I would ad­vise, in most cases, against us­ing non-nu­mer­i­cal pro­ce­dures to cre­ate what ap­pear to be nu­mer­i­cal prob­a­bil­ities. Num­bers should come from num­bers.

Now there are benefits from try­ing to trans­late your gut feel­ings of un­cer­tainty into ver­bal prob­a­bil­ities. It may help you spot prob­lems like the con­junc­tion fal­lacy. It may help you spot in­ter­nal in­con­sis­ten­cies—though it may not show you any way to rem­edy them.

But you shouldn’t go around think­ing that, if you trans­late your gut feel­ing into “one in a thou­sand”, then, on oc­ca­sions when you emit these ver­bal words, the cor­re­spond­ing event will hap­pen around one in a thou­sand times. Your brain is not so well-cal­ibrated. If in­stead you do some­thing non­ver­bal with your gut feel­ing of un­cer­tainty, you may be bet­ter off, be­cause at least you’ll be us­ing the gut feel­ing the way it was meant to be used.

This spe­cific topic came up re­cently in the con­text of the Large Hadron Col­lider, and an ar­gu­ment given at the Global Catas­trophic Risks con­fer­ence:

That we couldn’t be sure that there was no er­ror in the pa­pers which showed from mul­ti­ple an­gles that the LHC couldn’t pos­si­bly de­stroy the world. And more­over, the the­ory used in the pa­pers might be wrong. And in ei­ther case, there was still a chance the LHC could de­stroy the world. And there­fore, it ought not to be turned on.

Now if the ar­gu­ment had been given in just this way, I would not have ob­jected to its episte­mol­ogy.

But the speaker ac­tu­ally pur­ported to as­sign a prob­a­bil­ity of at least 1 in 1000 that the the­ory, model, or calcu­la­tions in the LHC pa­per were wrong; and a prob­a­bil­ity of at least 1 in 1000 that, if the the­ory or model or calcu­la­tions were wrong, the LHC would de­stroy the world.

After all, it’s surely not so im­prob­a­ble that fu­ture gen­er­a­tions will re­ject the the­ory used in the LHC pa­per, or re­ject the model, or maybe just find an er­ror. And if the LHC pa­per is wrong, then who knows what might hap­pen as a re­sult?

So that is an ar­gu­ment—but to as­sign num­bers to it?

I ob­ject to the air of au­thor­ity given these num­bers pul­led out of thin air. I gen­er­ally feel that if you can’t use prob­a­bil­is­tic tools to shape your feel­ings of un­cer­tainty, you ought not to dig­nify them by call­ing them prob­a­bil­ities.

The al­ter­na­tive I would pro­pose, in this par­tic­u­lar case, is to de­bate the gen­eral rule of ban­ning physics ex­per­i­ments be­cause you can­not be ab­solutely cer­tain of the ar­gu­ments that say they are safe.

I hold that if you phrase it this way, then your mind, by con­sid­er­ing fre­quen­cies of events, is likely to bring in more con­se­quences of the de­ci­sion, and re­mem­ber more rele­vant his­tor­i­cal cases.

If you de­bate just the one case of the LHC, and as­sign spe­cific prob­a­bil­ities, it (1) gives very shaky rea­son­ing an un­due air of au­thor­ity, (2) ob­scures the gen­eral con­se­quences of ap­ply­ing similar rules, and even (3) cre­ates the illu­sion that we might come to a differ­ent de­ci­sion if some­one else pub­lished a new physics pa­per that de­creased the prob­a­bil­ities.

The authors at the Global Catastrophic Risks conference seemed to be suggesting that we could just do a bit more analysis of the LHC and then switch it on. This struck me as the most disingenuous part of the argument. Once you admit the argument “Maybe the analysis could be wrong, and who knows what happens then,” there is no possible physics paper that can ever get rid of it.

No matter what other physics papers had been published previously, the authors would have used the same argument and made up the same numerical probabilities at the Global Catastrophic Risks conference. I cannot be sure of this statement, of course, but it has a probability of 75%.

In gen­eral a ra­tio­nal­ist tries to make their minds func­tion at the best achiev­able power out­put; some­times this in­volves talk­ing about ver­bal prob­a­bil­ities, and some­times it does not, but always the laws of prob­a­bil­ity the­ory gov­ern.

If all you have is a gut feel­ing of un­cer­tainty, then you should prob­a­bly stick with those al­gorithms that make use of gut feel­ings of un­cer­tainty, be­cause your built-in al­gorithms may do bet­ter than your clumsy at­tempts to put things into words.

Now it may be that by rea­son­ing thusly, I may find my­self in­con­sis­tent. For ex­am­ple, I would be sub­stan­tially more alarmed about a lot­tery de­vice with a well-defined chance of 1 in 1,000,000 of de­stroy­ing the world, than I am about the Large Hadron Col­lider be­ing switched on.

On the other hand, if you asked me whether I could make one mil­lion state­ments of au­thor­ity equal to “The Large Hadron Col­lider will not de­stroy the world”, and be wrong, on av­er­age, around once, then I would have to say no.
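The arithmetic behind that test is just linearity of expectation; a tiny illustrative sketch, with numbers arbitrary:

```python
# Sketch of the "million statements" test: holding each of N statements
# at probability p of being wrong commits you to an expected error count
# of N * p, by linearity of expectation. "A million statements, wrong on
# average around once" is the claim that p is about one in a million.

def expected_errors(n_statements: int, p_wrong_each: float) -> float:
    """Expected number of false statements among n, each wrong with
    probability p_wrong_each."""
    return n_statements * p_wrong_each

print(expected_errors(1_000_000, 1e-6))  # about one expected error
print(expected_errors(1_000_000, 1e-3))  # about a thousand: far less confident
```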

What should I do about this in­con­sis­tency? I’m not sure, but I’m cer­tainly not go­ing to wave a magic wand to make it go away. That’s like find­ing an in­con­sis­tency in a pair of maps you own, and quickly scrib­bling some al­ter­a­tions to make sure they’re con­sis­tent.

I would also, by the way, be sub­stan­tially more wor­ried about a lot­tery de­vice with a 1 in 1,000,000,000 chance of de­stroy­ing the world, than a de­vice which de­stroyed the world if the Judeo-Chris­tian God ex­isted. But I would not sup­pose that I could make one billion state­ments, one af­ter the other, fully in­de­pen­dent and equally fraught as “There is no God”, and be wrong on av­er­age around once.

I can’t say I’m happy with this state of epistemic af­fairs, but I’m not go­ing to mod­ify it un­til I can see my­self mov­ing in the di­rec­tion of greater ac­cu­racy and real-world effec­tive­ness, not just mov­ing in the di­rec­tion of greater self-con­sis­tency. The goal is to win, af­ter all. If I make up a prob­a­bil­ity that is not shaped by prob­a­bil­is­tic tools, if I make up a num­ber that is not cre­ated by nu­mer­i­cal meth­ods, then maybe I am just defeat­ing my built-in al­gorithms that would do bet­ter by rea­son­ing in their na­tive modes of un­cer­tainty.

Of course this is not a li­cense to ig­nore prob­a­bil­ities that are well-founded. Any nu­mer­i­cal found­ing at all is likely to be bet­ter than a vague feel­ing of un­cer­tainty; hu­mans are ter­rible statis­ti­ci­ans. But pul­ling a num­ber en­tirely out of your butt, that is, us­ing a non-nu­mer­i­cal pro­ce­dure to pro­duce a num­ber, is nearly no foun­da­tion at all; and in that case you prob­a­bly are bet­ter off stick­ing with the vague feel­ings of un­cer­tainty.

Which is why my Over­com­ing Bias posts gen­er­ally use words like “maybe” and “prob­a­bly” and “surely” in­stead of as­sign­ing made-up nu­mer­i­cal prob­a­bil­ities like “40%” and “70%” and “95%”. Think of how silly that would look. I think it ac­tu­ally would be silly; I think I would do worse thereby.

I am not the kind of straw Bayesian who says that you should make up prob­a­bil­ities to avoid be­ing sub­ject to Dutch Books. I am the sort of Bayesian who says that in prac­tice, hu­mans end up sub­ject to Dutch Books be­cause they aren’t pow­er­ful enough to avoid them; and more­over it’s more im­por­tant to catch the ball than to avoid Dutch Books. The math is like un­der­ly­ing physics, in­escapably gov­ern­ing, but too ex­pen­sive to calcu­late. Nor is there any point in a rit­ual of cog­ni­tion which mimics the sur­face forms of the math, but fails to pro­duce sys­tem­at­i­cally bet­ter de­ci­sion-mak­ing. That would be a lost pur­pose; this is not the true art of liv­ing un­der the law.

• I would ad­vise, in most cases, against us­ing non-nu­mer­i­cal pro­ce­dures to cre­ate what ap­pear to be nu­mer­i­cal prob­a­bil­ities. Num­bers should come from num­bers.

I very much disagree with this quote, and much of the rest of the post. Most of our reasoning about social stuff does not start from concrete numbers, so this rule would forbid my giving numbers to most of what I reason about. I say go ahead and pick a number out of the air, but then be very willing to revise it upon the slightest evidence that it doesn’t fit well with your other numbers. It is anchoring that is the biggest problem. Being forced to pick numbers can be a great and powerful discipline to help you find and eliminate errors in your reasoning.

• Re­cently I did some prob­a­bil­ity calcu­la­tions, start­ing with “made-up” num­bers, and up­dat­ing us­ing Bayes’ Rule, and the re­sult was that some­thing would likely hap­pen which my gut said most firmly would ab­solutely not, never, ever, hap­pen.

I told my­self that my prob­a­bil­ity as­sign­ments must have been way off, or I must have made an er­ror some­where. After all, my gut couldn’t pos­si­bly be so mis­taken.

The thing hap­pened, by the way.

This is one rea­son why I agree with RI, and dis­agree with Eliezer.

• Could you give more de­tails, at least about the do­main in which you were rea­son­ing? In­tu­itions vary wildly in cal­ibra­tion with topic.

• What’s the prob­a­bil­ity the LHC will save the world? That ei­ther some side effect of run­ning it, or some knowl­edge gained from it, will pre­vent a fu­ture catas­tro­phe? At least of the same or­der of fuzzy small non-ze­roness as the dooms­day sce­nario.

I think that’s the larger fault here. You don’t just have to show that X has some chance of being bad in order to justify being against it, you also have to show that it’s predictably worse than not-X. If you can’t, then the uncertain badness is better read as noise at the straining limit of your ability to predict—and that to me adds back up to normality.

• I think that Robin was say­ing that an­chor­ing, not the ar­bi­trari­ness of start­ing points, is the big prob­lem for tran­si­tion­ing from qual­i­ta­tive to quan­ti­ta­tive think­ing. You can make up num­bers and so long as you up­date them well you get to the right place, but if you an­chor too much against move­ment from your start­ing point but not against move­ment to­wards it you never get to an ac­cu­rate des­ti­na­tion.

• I strongly agree with Robin here. Thanks Robin for mak­ing the point so clearly. I have to ad­mit that not us­ing num­bers may be a bet­ter rule for a larger num­ber of peo­ple than what they are cur­rently us­ing, as is ma­jori­tar­i­anism, but nei­ther is a good rule for peo­ple who are try­ing to reach the best available be­liefs.

• Hardly the most profound ad­den­dum, I know, but dummy num­bers can be use­ful for illus­tra­tive pur­poses—for in­stance, to show how steeply prob­a­bil­ities de­cline as claims are con­joined.
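For instance, a quick illustrative computation with dummy numbers, assuming the claims are independent:

```python
# Dummy numbers purely to illustrate how steeply the probability of a
# conjunction declines: even if every individual claim is 90% likely,
# twenty of them together are not.

def conjunction_prob(p_each: float, n_claims: int) -> float:
    """Probability that n independent claims, each true with
    probability p_each, all hold at once."""
    return p_each ** n_claims

for n in (1, 5, 10, 20):
    print(n, round(conjunction_prob(0.9, n), 3))
# 1 claim: 0.9; 5 claims: ~0.59; 10 claims: ~0.35; 20 claims: ~0.12
```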

• I say go ahead and pick a num­ber out of the air,

A some­what ar­bi­trary start­ing num­ber is also use­ful as a seed for a pro­cess of iter­a­tive ap­prox­i­ma­tion to a true value.

• Eliezer, the money pump re­sults from cir­cu­lar prefer­ences, which should ex­ist ac­cord­ing to your de­scrip­tion of the in­con­sis­tency. Sup­pose we have a mil­lion state­ments, each of which you be­lieve to be true with equal con­fi­dence, one of which is “The LHC will not de­stroy the earth.”

Sup­pose I am about to pick a ran­dom state­ment from the list of a mil­lion, and I will de­stroy the earth if I hap­pen to pick a false state­ment. By your own ad­mis­sion, you es­ti­mate that there is more than one false state­ment in the list. You will there­fore pre­fer that I play a lot­tery with odds of 1 in a mil­lion, de­stroy­ing the earth only if I win.

It makes no differ­ence if I pick a num­ber ran­domly be­tween one and a mil­lion, and then play the lot­tery men­tioned (ig­nor­ing the num­ber picked.)

But now if I pick a num­ber ran­domly be­tween one and a mil­lion, and then play the lot­tery men­tioned only if I didn’t pick the num­ber 500,000, while if I do pick the num­ber 500,000, I de­stroy the earth only if the LHC would de­stroy the earth, then you would pre­fer this state of af­fairs, since you pre­fer “de­stroy the earth if the LHC would de­stroy the earth” to “de­stroy the earth with odds of one in a mil­lion.”

But now I can also sub­sti­tute the num­ber 499,999 with some other state­ment that you hold with equal con­fi­dence, so that if I pick 499,999, in­stead of play­ing the lot­tery, I de­stroy the earth if this state­ment is false. You will also pre­fer this state of af­fairs for the same rea­son, since you hold this state­ment with equal con­fi­dence to “The LHC will not de­stroy the earth.”

And so on. It fol­lows that you pre­fer to go back to the origi­nal state of af­fairs, which con­sti­tutes cir­cu­lar prefer­ences and im­plies a money pump.

• Can’t give de­tails, there would be a risk of re­veal­ing my iden­tity.

I have come up with a hypothesis to explain the inconsistency. Eliezer’s verbal estimate of how many similar claims he can make, while being wrong on average only once, is actually his best estimate of his subjective uncertainty. How he would act in relation to the lottery is his estimate influenced by the overconfidence bias. This is an interesting hypothesis because it would provide a measurement of his overconfidence. For example, which would he stop: the “Destroy the earth if God exists” lottery, or “Destroy the earth at odds of one in a trillion”? How about a quadrillion? A quintillion? A googolplex? One in Graham’s number? At some point Eliezer will have to prefer to turn off the God lottery, and comparing this to something like one in a billion, his verbal estimate, would tell us exactly how overconfident he is.

Since the in­con­sis­tency would al­low Eliezer to be­come a money-pump, Eliezer has to ad­mit that some ir­ra­tional­ity must be re­spon­si­ble for it. I as­sign at least a 1% chance to the pos­si­bil­ity that the above hy­poth­e­sis is true. Given even such a chance, and given Eliezer’s work, he should come up with meth­ods to test the hy­poth­e­sis, and if it is con­firmed, he should change his way of act­ing in or­der to con­form with his ac­tual best es­ti­mate of re­al­ity, rather than his over­con­fi­dent es­ti­mate of re­al­ity.

Un­for­tu­nately, if the hy­poth­e­sis is true, by that very fact, Eliezer is un­likely to take these steps. Deter­min­ing why can be left as an ex­er­cise to the reader.

• If all you have is a gut feel­ing of un­cer­tainty, then you should prob­a­bly stick with those al­gorithms that make use of gut feel­ings of un­cer­tainty, be­cause your built-in al­gorithms may do bet­ter than your clumsy at­tempts to put things into words.

I would like to add some­thing to this. Your gut feel­ing is of course the sum of ex­pe­rience you have had in this life plus your evolu­tion­ary her­i­tage. This may not be ver­bal­ized be­cause your gut feel­ing (as an ex­am­ple) also in­cludes sin­gle neu­rons firing which don’t nec­es­sar­ily con­tribute to the sta­bil­ity of a con­cept in your mind.

But I warn against then sim­ply fol­low­ing one’s gut feel­ing; of course, if you have to de­cide im­me­di­ately (in an emer­gency), there is no al­ter­na­tive. Do it! You can’t get bet­ter than the sum of your ex­pe­rience in that mo­ment.

But usu­ally only hav­ing a gut feel­ing and not be­ing able to ver­bal­ize should mean one thing for you: Go out and gather more in­for­ma­tion! (Read books to sta­bi­lize or cre­ate con­cepts in your mind; do ex­per­i­ments; etc etc)

You will find that gut feel­ings can change quite dra­mat­i­cally af­ter read­ing a good book on a sub­ject. So why should you trust them if you have the time to do some­thing about them, viz. trans­fer them into the sym­bol space of your mind so the con­cepts are available for higher-or­der rea­son­ing?

I’d like to add though, that the origi­nal phrase was “al­gorithms that make use of gut feel­ings… ”. This isn’t the same as say­ing “a policy of always sub­mit­ting to your gut feel­ings”.

I’m pic­tur­ing a de­ci­sion tree here: some­thing that tells you how to be­have when your gut feel­ing is “I’m ut­terly con­vinced” {Act on the feel­ing im­me­di­ately}, vs how you might act if you had feel­ings of “vague un­ease” {con­tinue cau­tiously, de­lay tak­ing any steps that con­sti­tute a ma­jor com­mit­ment, while you try to iden­tify the source of the un­ease}. Your al­gorithm might also in­volve as­sess­ing the re­li­a­bil­ity of your gut feel­ing; ex­pe­rience and rea­son might al­low you to know that your gut is very re­li­able in cer­tain mat­ters, and much less re­li­able in oth­ers.

The details of the algorithm are up for debate of course. For the purposes of this discussion, I place no importance on the details of the algorithm I described. The point is just that these procedures are helpful for rational thinking, they aren’t numerical procedures, and a numerical procedure wouldn’t automatically be better just because it’s numerical.

• In the sen­tence “Try­ing to catch a fly­ing ball, you’re prob­a­bly bet­ter off with your brain’s built-in mechanisms, then us­ing de­liber­a­tive ver­bal rea­son­ing to in­vent or ma­nipu­late prob­a­bil­ities,” I think you meant “than” rather than “then”?

• if I make up a num­ber that is not cre­ated by nu­mer­i­cal meth­ods, then maybe I am just defeat­ing my built-in al­gorithms that would do bet­ter by rea­son­ing in their na­tive modes of un­cer­tainty.

I must re­mem­ber this post. I ar­gue along those lines from time to time, though I’m pretty sure I think hu­mans are much worse at math (and bet­ter at judge­ment) than you do, so I recom­mend against talk­ing in prob­a­bil­ities more of­ten.

• I sus­pect my state­ment is the one that needed clar­ifi­ca­tion. I was mea­sur­ing the size of a prob­lem by the psy­cholog­i­cal difficulty of over­com­ing it. If an­chor­ing is too big to over­come, it is bet­ter to avoid situ­a­tions where it ap­plies. And iden­ti­fy­ing the bias is not (nec­es­sar­ily) much of a step to­wards over­com­ing it.

• Some­one ac­tu­ally bought Pas­cal’s wa­ger? Oh boy. That es­say looks to me like a perfect ex­am­ple of some­one pul­ling oh-so-con­ve­nient num­bers out of their fun­da­ment and then up­dat­ing on them. See, it’s math, I’m not delu­sional. sigh

• Was this speaker a be­liever in Disc­wor­l­dian prob­a­bil­ity the­ory? Which states, of course, that mil­lion-to-one chances come up 100% of the time, but thou­sand-to-one chances never. Maybe those num­bers weren’t plucked out of the air.

All we have to do is op­er­ate the LHC while stand­ing on one foot, and the prob­a­bil­ity of the uni­verse ex­plod­ing will be nudged away from mil­lion-to-one (doesn’t mat­ter which di­rec­tion—who­ever heard of a 999,999-1 chance com­ing up?) and the uni­verse will be saved.

• Un­known: I would REALLY like to know de­tails.

Gunther Greindl: In my gut, I STRONGLY agree. My revealed preferences also match it. However, Philip Tetlock’s “Expert Political Judgment” tells me that among political experts, who have much better predictive powers than educated laypeople, specialists in X don’t outperform specialists in Y in making predictions about X. This worries me A LOT. Another thing that worries me is that decomposing events exhaustively into their subcomponents makes the aggregate event seem more likely, and it seems to me that by becoming an expert you come to automatically decompose events into their subcomponents.

Eliezer: I am pretty con­fi­dent that it would be pos­si­ble in prin­ci­ple, though not due to time con­straints, to make a billion state­ments and get none wrong while keep­ing cor­re­la­tions fairly low.

• “The lottery would definitely destroy worlds, with as many deaths as killing over six thousand people in each Everett branch.”

We speak so casually about interpreting probabilities as frequencies across the many worlds, but I would suggest we need a rigorous treatment of what those other worlds are proportionally like before confidently doing so. (Cf. my and Hal’s comments in the June Open Thread.)

• great post.

• The al­ter­na­tive I would pro­pose, in this par­tic­u­lar case, is to de­bate the gen­eral rule of ban­ning physics ex­per­i­ments be­cause you can­not be ab­solutely cer­tain of the ar­gu­ments that say they are safe.

Giving up on debating the probability of a particular proposition, and shifting to debating the merits of a particular rule, is, I feel, one of the ideas behind frequentist statistics. Like, I’m not going to say anything about whether the true mean is in my confidence interval in this particular case. But note that using this confidence interval formula works pretty well on average.

• One of my fa­vorite les­sons from Bayesi­anism is that the task of calcu­lat­ing the prob­a­bil­ity of an event can be bro­ken down into sim­pler calcu­la­tions, so that even if you have no ba­sis for as­sign­ing a num­ber to P(H) you might still have suc­cess es­ti­mat­ing the like­li­hood ra­tio.

• How is that in­for­ma­tion by it­self use­ful?

• Good ques­tion. I didn’t have an an­swer right away. I think it’s use­ful be­cause it gives struc­ture to the act of up­dat­ing be­liefs. When I en­counter ev­i­dence for some H I im­me­di­ately know to es­ti­mate P(E|H) and P(E|~H) and I know that this ra­tio alone de­ter­mines the di­rec­tion and de­gree of the up­date. Even if the num­bers are vague and ad hoc this struc­ture pre­cludes a lot of clever ar­gu­ing I could be do­ing, leads to pro­duc­tive lines of in­quiry, and is im­mensely helpful for mod­el­ing my dis­agree­ment with oth­ers. Be­fore read­ing LW I could have told you, if asked, that P(H), P(E|H), and P(E|~H) were worth con­sid­er­ing; but be­com­ing acutely aware that these are THE three quan­tities I need, no more and no less, has made a huge differ­ence in my think­ing for the bet­ter (not to sound dog­matic; I’ll use differ­ent paradigms when I think they’re more ap­pro­pri­ate e.g. when do­ing math).
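A short sketch of that structure, using the odds form of Bayes’ Rule with arbitrary illustrative numbers:

```python
# Odds form of Bayes' Rule: posterior odds = prior odds * likelihood ratio.
# The three quantities P(H), P(E|H), P(E|~H) are all that is needed, and
# the ratio P(E|H)/P(E|~H) alone sets the direction and size of the update.

def posterior(p_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) from the three quantities named above."""
    prior_odds = p_h / (1.0 - p_h)
    likelihood_ratio = p_e_given_h / p_e_given_not_h
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Evidence four times as likely under H as under ~H moves P(H) from 0.5 to 0.8.
print(posterior(0.5, 0.8, 0.2))

# Evidence equally likely either way (likelihood ratio 1) moves nothing.
print(posterior(0.3, 0.5, 0.5))
```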

• Re: Some­one ac­tu­ally bought Pas­cal’s wa­ger? Oh boy.

E.g. see: Di­nesh D’Souza, 8 min­utes in.

• I’ll also dis­agree with the ar­gu­ment Eliezer gives here. See Robin’s post. In ad­di­tion to com­ing up with a prob­a­bil­ity with which we think an event will oc­cur, we should also quan­tify how sure we are that that is the best pos­si­ble es­ti­mate of the prob­a­bil­ity.

e.g. I can calculate the odds I’ll win a lottery, and if someone thinks their estimate of the odds is much better, then (if we lack time or capital constraints) we can arrange bets about whose predictions will prove more accurate over many lotteries.

• athmwiji, yes, num­bers are not nec­es­sary for an­chor­ing. I think that they make the an­chor­ing worse, but it would be very bad to avoid num­bers just be­cause they make it easy to see an­chor­ing.

• Me: There’s more than just P != NP that defeats try­ing to catch a fly­ing ball by pre­dict­ing where it will land and go­ing there. Or, for that mat­ter, try­ing to go there by com­put­ing a se­ries of mus­cu­lar ac­tions and then do­ing them.

Cale­do­nian: You DO re­al­ize that some hu­mans are perfectly ca­pa­ble of ac­com­plish­ing pre­cisely that ac­tion, right?

Peo­ple can catch balls. No­body can do it by the mechanism de­scribed. Fielders in ball games will turn away from the ball and sprint to­wards where they think it will come down, if they can’t run fast enough while keep­ing it in sight, but they still have to look at the ball again to stand any chance of catch­ing it. The ini­tial sense data it­self doesn’t de­ter­mine the an­swer, how­ever well pro­cessed.

When what you need is a smaller prob­a­bil­ity cloud, calcu­lat­ing the same cloud more pre­cisely doesn’t help. Pre­ci­sion about your ig­no­rance is not knowl­edge.

• Numbers are not needed for anchoring. We could arrange the probabilities of the truth of statements into partially ordered sets. This poset can even include statements about the probabilistic relation between statements.

Well, we should be careful to avoid the barber’s paradox, though… things like x = {x is more likely than y} are a bad idea.

I think it would be better to avoid just making up numbers until we absolutely have to, that is, until we actually find ourselves playing a lottery for the continued existence of Earth, or until there is some numerical process grounded in statistics, resting on some assumptions, that provides the numbers. However, by anchoring probabilities in posets we might get bounds on things for which we cannot compute probabilities.

• It is an­chor­ing that is the biggest prob­lem.

• Unknown: “God exists” is not well specified. For something like “Zeus Exists” (not exactly that, some guy named Zeus does exist, and in some quantum branch there’s probably an AGI that creates the world of Greek Myth in simulation) I would say that my confidence in its falsehood is greater than my confidence in the alleged probability of winning a lottery could be.

• Do you know what you get when you mix high energy colliders with Professor Otto Rössler’s charged micro black hole theory?

An­swer: a golf ball (in 50 months to 50 years...)

• Eliezer, you are thinking of Utilitarian (also begins with U, which may explain the confusion.) See http://utilitarian-essays.com/pascal.html

I’ll get back to the other things later (in­clud­ing the money pump.) Un­for­tu­nately I will be busy for a while.

• Un­known, de­scribe the money pump. Also, are you the guy who con­verted to Chris­ti­an­ity due to Pas­cal’s Wager or am I think­ing of some­one else?

The tug-of-war in “How ex­treme a low prob­a­bil­ity to as­sign?” is driven, on the one hand, by the need for our prob­a­bil­ities to sum to 1 - so if you as­sign prob­a­bil­ities >> 10^-6 to un­jus­tified state­ments of such com­plex­ity that more than a mil­lion of them could be pro­duced, you will be in­con­sis­tent and Dutch-book­able. On the other hand, it’s ex­tremely hard to be right about any­thing a mil­lion times in a row.

My in­stinct is to look for a de­on­tish hu­man strat­egy for han­dling this class of prob­lem, one that takes into ac­count both hu­man over­con­fi­dence and the de­sire-to-dis­miss, and also the temp­ta­tion for hu­mans to make up silly things with huge con­se­quences and claim “but you can’t know I’m wrong”.
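As a sketch of the sum-to-1 constraint above, assuming for simplicity that the statements are mutually exclusive, exact arithmetic makes the incoherence plain:

```python
from fractions import Fraction

# Mutually exclusive statements cannot each carry a probability much
# larger than 1/N when there are N of them: the probabilities must sum
# to at most 1. Exact rational arithmetic avoids any float noise.

p_each = Fraction(1, 1000)      # "at least 1 in 1000" for every statement
n_statements = 1_000_000
total = p_each * n_statements
print(total)                    # the "probabilities" sum to 1000, not <= 1

# The most mutually exclusive statements that can each coherently
# receive a probability of at least 1/1000:
print(Fraction(1, 1) / p_each)  # 1000
```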

• Eliezer, the cor­rect way to re­solve your in­con­sis­tency seems to be to be less ap­prov­ing of novel ex­per­i­ments, es­pe­cially when they aren’t yet nec­es­sary or prob­a­bly very use­ful, and when a bit later we will likely have more ex­per­tise with re­gard to them. I re­fer to a com­ment I just made in an­other thread.

• There’s more than just P != NP that defeats try­ing to catch a fly­ing ball by pre­dict­ing where it will land and go­ing there. Or, for that mat­ter, try­ing to go there by com­put­ing a se­ries of mus­cu­lar ac­tions and then do­ing them.
You DO re­al­ize that some hu­mans are perfectly ca­pa­ble of ac­com­plish­ing pre­cisely that ac­tion, right?

• If P != NP and the uni­verse has no source of ex­po­nen­tial com­put­ing power, then there are ev­i­den­tial up­dates too difficult for even a su­per­in­tel­li­gence to com­pute—even though the prob­a­bil­ities would be quite well-defined, if we could af­ford to calcu­late them.

...

Try­ing to catch a fly­ing ball, you’re prob­a­bly bet­ter off with your brain’s built-in mechanisms, then [than?] us­ing de­liber­a­tive ver­bal rea­son­ing to in­vent or ma­nipu­late prob­a­bil­ities.

There’s more than just P != NP that defeats try­ing to catch a fly­ing ball by pre­dict­ing where it will land and go­ing there. Or, for that mat­ter, try­ing to go there by com­put­ing a se­ries of mus­cu­lar ac­tions and then do­ing them. You can’t sense where the ball is or what your body is do­ing ac­cu­rately enough to plan, then ex­e­cute ac­tions with the pre­ci­sion re­quired. A prob­a­bil­ity cloud perfectly calcu­lated from all the available in­for­ma­tion isn’t good enough, if it’s big­ger than your hand.

This is how to catch a ball: move so as to keep its ap­par­ent di­rec­tion (both az­i­muth and ele­va­tion) con­stant.

But this doesn’t mean you’re go­ing be­yond prob­a­bil­ity the­ory or above prob­a­bil­ity the­ory.

It doesn’t mean you’re do­ing prob­a­bil­ity the­ory ei­ther, even when you re­li­ably win. The rule “move so as to keep the ap­par­ent di­rec­tion con­stant” says noth­ing about prob­a­bil­ities. If any­one wants to try at a prob­a­bil­ity-the­o­retic ac­count of its effec­tive­ness, I would be in­ter­ested, but scep­ti­cal in ad­vance.
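For what it’s worth, the rule’s effectiveness can at least be checked in a toy simulation rather than a probability-theoretic account. The sketch below assumes a 2D world with no air resistance, a fielder who can reposition freely, and arbitrary launch numbers; the fielder holds the elevation angle observed one second into the flight.

```python
import math

# Toy check of "move so as to keep the ball's apparent direction
# constant": a fielder who keeps the ball's elevation angle fixed
# ends up (in this idealized 2D world) where the ball lands.

g = 9.8                               # m/s^2
v0, launch = 30.0, math.radians(45.0)  # arbitrary launch speed and angle
vx, vy = v0 * math.cos(launch), v0 * math.sin(launch)

dt = 0.001
t = 1.0                               # reference time for the angle to hold
x = vx * t
y = vy * t - 0.5 * g * t * t
fielder_x = 60.0                      # starts downfield of the ball
target_tan = y / (fielder_x - x)      # tangent of the apparent elevation

while y > 0:
    # reposition so height / horizontal-gap matches the reference tangent
    fielder_x = x + y / target_tan
    t += dt
    x = vx * t
    y = vy * t - 0.5 * g * t * t

landing_x = vx * (2.0 * vy / g)       # analytic landing point
print(abs(fielder_x - landing_x))     # small: the fielder meets the ball
```

As the ball descends, holding the angle constant forces the gap between fielder and ball toward zero, so the strategy converges on the landing spot without ever representing a probability.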

• I would not be comfortable with the inconsistency you describe about the lottery. I’m not sure how you can let it stand. I guess the problem is that you don’t know which instinct to fix, and just reversing one belief at random is not going to improve accuracy on average.

Still, wouldn’t careful introspection be likely either to expose some more fundamental set of inconsistent beliefs that you can fix, or at least to lead you to decide that one of the two beliefs is in fact stronger than the other, in which case you should reverse the weaker one? It seems unlikely that the two beliefs are exactly balanced in your degree of credence.

For the reactor, I’d say that reasoning about one-in-a-thousand odds is in fact a good way to analyze the problem. It’s how I approach other, similar issues. If I’m considering one of two routes through heavy traffic, I do roughly estimate the odds of running into a traffic jam. These are very crude estimates, but they are better than nothing.

The biggest criticism I would give such reasoning in this case is that as we go out the probability scale, we have much less experience, and our estimates are going to be far less accurate and calibrated. Furthermore, in these situations we often end up comparing or dividing probabilities, and error percentages go up astronomically in such calculations. So while the final figure may represent a mean, the deviation is so large that even slight differences in approach could have led to a dramatically different answer.
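The point about dividing probabilities is easy to see numerically. Suppose each of two rare-event estimates is crude but individually not terrible, off by at most 30% in either direction, and you take their ratio (all the numbers here are made up for illustration):

```python
import random

random.seed(0)
p_true, q_true = 1e-4, 2e-4      # two "true" rare-event probabilities (made up)

def crude_estimate(p):
    # each estimate is off by at most 30% in either direction
    return p * random.uniform(0.7, 1.3)

true_ratio = p_true / q_true     # 0.5
errors = [abs(crude_estimate(p_true) / crude_estimate(q_true) - true_ratio)
          / true_ratio
          for _ in range(100_000)]

# Worst-case relative error of the ratio approaches 1.3/0.7 - 1, about 86%:
# nearly triple the 30% error of either input.
worst = max(errors)
```

Each input was within 30% of the truth, yet the ratio can be off by close to 86%; that is how error percentages blow up once you start comparing or dividing these figures.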

I would give substantially higher estimates that our theories are wrong—indeed, by some measures we know for sure our theories are wrong, since they are inconsistent and none of the unifications work. However, I’d give much lower estimates that the theories are wrong in just such a way that would lead to us destroying the earth.

I assume you were being facetious when you gave 75% odds that the authors would have maintained their opinion in different circumstances. Yet to me, it is a useful figure to read, and does offer insight into how strongly you believe. Without that number, I’d have guessed that you felt more strongly than that.

• For example, I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider switched on. If I could prevent only one of these events, I would prevent the lottery.

On the other hand, if you asked me whether I could make one million statements of authority equal to “The Large Hadron Collider will not destroy the world”, and be wrong, on average, around once, then I would have to say no.

Hmm… might this be the heuristic that makes people prefer a 1% chance of 1000 deaths to a definite death for 5? The lottery would definitely destroy worlds, with as many deaths as killing over six thousand people in each Everett branch. Running the LHC means a higher expected number of dead worlds by your own estimates, but it’s all or nothing across universes. It will most probably just be safe.

If you had a definite number for both P(Doomsday Lottery Device Win) and P(Doomsday LHC) you’d shut up and multiply, but you haven’t, so you don’t. But you still should, because you’re pretty sure P(D-LHC) >> P(DLDW) even if you don’t know a figure for P(D-LHC).

This assumes Paul’s assumption, above.
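The “shut up and multiply” step is just expected-value arithmetic. A minimal sketch, taking the world’s population as roughly 6.7 billion; the LHC figure is a purely illustrative stand-in, since the whole problem is that nobody has a number for it:

```python
world_pop = 6.7e9        # rough 2008 world population (assumption)

# The doomsday lottery: a well-defined one-in-a-million chance.
p_lottery = 1e-6
exp_deaths_lottery = p_lottery * world_pop    # about 6,700: "over six thousand"

# If you can't make a million statements like "the LHC will not destroy
# the world" and be wrong only around once, your implied P(D-LHC) exceeds
# 1e-6; take 1e-5 purely as an illustrative stand-in.
p_lhc = 1e-5
exp_deaths_lhc = p_lhc * world_pop            # ten times the lottery's toll
```

On any stand-in value above one-in-a-million, the multiplication says the LHC is the one to prevent, which is exactly the inconsistency being pointed at.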

• Calculating probabilities about nearly any real-world event is extremely complex. Someone who accepts the logic of your post shouldn’t believe there is much value to Bayesian analysis, other than allowing you to determine whether new information should cause you to increase or decrease your estimate of the probability of some event occurring.

It should be possible for someone to answer the following question: Is the probability of X occurring greater or less than Y? And if you answer enough of these questions, you can basically determine the probability of X.
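That procedure is just binary search on the probability scale: each greater-or-less answer halves the interval P(X) can lie in, so twenty answers pin it down to within about one part in a million. A sketch, where the hidden probability and the perfectly reliable answerer are both idealizations:

```python
def pin_down(is_greater_than, lo=0.0, hi=1.0, answers=20):
    """Locate an unknown probability using only greater-or-less answers.

    is_greater_than(y) answers the question "is P(X) greater than y?".
    Each answer halves the remaining interval.
    """
    for _ in range(answers):
        mid = (lo + hi) / 2
        if is_greater_than(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Idealized answerer around a hidden probability (illustration only):
p_hidden = 0.137
estimate = pin_down(lambda y: p_hidden > y)
# estimate lies within 2**-21, about 5e-7, of p_hidden
```

Of course, real people’s comparative judgments are noisy and intransitive, which is where the idealization breaks down; but the logical point stands that comparisons alone suffice to determine a probability.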

• This is mostly what economists refer to as the difference between implicit and explicit knowledge. The difference between skills and verbal knowledge. I strongly recommend Thomas Sowell’s “Knowledge and Decisions”.

• If I could prevent only one of these events, I would prevent the lottery.

I’m assuming that this is in a world where there are no payoffs to the LHC; we could imagine a world in which it’s decided that switching the LHC on is too risky, but before it is mothballed a group of rogue physicists try to do the riskiest experiment they can think of on it out of sheer ennui.