Can we always assign, and make sense of, subjective probabilities?

Epistemic status: I wrote this post quickly, and largely to solicit feedback on the claims I make in it. This is because (a) I’m not sure about these claims (or how I’ve explained them), and (b) the question of what I should believe on this topic seems important in general and for various other posts I’m writing. (So please comment if you have any thoughts on this!)

I’ve now read a bunch on topics related to the questions covered here, but I’m not an expert, and I haven’t seen (or explicitly looked for) a direct treatment of those questions. It’s very possible this has already been thoroughly and clearly covered elsewhere; if so, please comment with the link!

I basically accept a Bayesian interpretation of probability, “in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief” (Wikipedia). Relatedly, I think I accept the idea that we can always assign probabilities to propositions (or at least use something like an uninformative prior), and “make sense of” these probabilities, even if sometimes we have incredibly little basis for making those probability estimates.

This idea seems to be disputed fairly often, and the disputes seem related to the purported distinction between “risk” and “uncertainty” (a distinction I think is confused). I think the arguments against this idea are flawed. But I want to test my beliefs and properly engage with those arguments. So in this post, I first discuss how I believe we can arrive at, and make sense of, probability estimates in what are sometimes put forward as “challenging cases”, before discussing what I think is probably the most challenging type of case: what I call “supernatural-type” claims.

Weak examples of “Knightian uncertainty”

Sometimes people propose what seem to me to be very weak examples of cases in which, they claim, we simply cannot arrive at probability estimates. (This may simply be a result of them having a frequentist interpretation of probability, but that often doesn’t seem to be made explicit or defended.) Here’s one example:

there are situations with so many unique features that they can hardly be grouped with similar cases, such as the danger resulting from a new type of virus, or the consequences of military intervention in conflict areas. These represent cases of (Knightian) uncertainty where no data are available to estimate objective probabilities. While we may rely on our subjective estimates under such conditions, no objective basis exists by which to judge them (e.g., LeRoy & Singell, 1987). (source)

It seems obvious to me that a wealth of data is available for such cases. There have been many viruses and military interventions before. None of those situations will perfectly mirror the situations we’re trying to predict, and that’s definitely a very important point. We should therefore think very carefully about whether we’re being too confident in our predictions (i.e., using too narrow a “confidence interval”[1] and thus not adequately preparing for especially “high” or “low” possibilities).

But we can clearly do better than nothing. To start small, you’d be comfortable with the claim that a new type of virus, if it hits this year, is more likely to kill somewhere between 0 and 1 billion people than somewhere between 1000 and 1001 billion people (i.e., far more than everyone alive), right? And in fact, we have empirical evidence that some people can reliably do better than chance (and better than “0 to 1 billion”) in making predictions about geopolitical events like these, at least over timelines of a few years (from Tetlock’s work).

AGI

What about something that seems more unique or unprecedented, and where we also may have to stretch our predictions further into the future, like artificial general intelligence (AGI) timelines? On that question, experts disagree wildly, and are seemingly quite swayed by things like how the question is asked (Katja Grace on 80k; search for “It’s a bit complicated” in the transcript). This makes me highly unconfident in any prediction I might make on the topic (and thus pushes me towards making decisions that are good given a wide range of possible timelines).

But I believe I know more than nothing. I believe I can reasonably assign some probability distribution (and then use something like the median or mean of that as if it were a point estimate, for certain purposes). If that seems like raw hubris, do you think it’s worth actually behaving as if AGI is just as likely to be developed 1 minute from now as somewhere around 2 to 300 years from now? What about behaving as if it’s likely to occur in some millennium 50 quintillion years from now, and not in this millennium? So you’d at least be fairly happy bounding your probability distribution somewhere in between those points 1 minute from now and 50 quintillion years from now, right?

One could say that all I’ve done there is argue that some probabilities we could assign would seem especially outrageous, not that we really can or should assign probabilities to this event. But if some probabilities are more reasonable than others (and it certainly seems they are, though I can’t prove it), then we can do better by using those probabilities than by using something like an uninformative prior.[2] And as far as I’m aware, principles for decision making without probabilities either essentially collapse to acting as if one is using an uninformative prior or predictably lead to seemingly irrational and bad decisions (I’ll be posting about this soon).

And in any case, we do have relevant data for the AGI question, even if we’ve never developed AGI itself—we have data on AI development more broadly, development related to computing/IT/robotics more broadly, previous transformative technologies (e.g., electricity), the current state of funding for AI, current governmental stances towards AI development, how funding and governmental stances have influenced tech in the past, etc.

Supernatural-type claims

But that leads me to what does seem like it could be a strong type of counterexample to the idea that we can always assign probabilities: claims of a “supernatural”, “metaphysical”, or “unobservable” nature. These are very fuzzy and debatable terms, but defining them isn’t my main purpose here, so instead I’ll just jump into some examples:

  1. What are the odds that “an all-powerful god” exists?

  2. What are the odds that “ghosts” exist?

  3. What are the odds that “magic” exists?

  4. What are the odds that “non-naturalistic moral realism” is correct (or that “non-natural objective moral facts” exist)?[3]

To me, and presumably most LessWrong readers, the most obvious response to these questions is to dissolve them, or to at least try to pin the questioner down on definitions. And I do think that’s very reasonable. But in this post I want to put my (current) belief that “we can always assign probabilities to propositions (or at least use something like an uninformative prior)” to a particularly challenging test, so from here on I’ll assume we’ve somehow arrived at a satisfactorily precise understanding of what the question is actually meant to mean.

In that case, my intuitions would suggest I should assign a very low probability to each of these propositions.[4] But what basis would I have for that? More specifically, what basis would I have for any particular probability (or probability distribution) I assign? And what would it even mean?

This is Chris Smith’s statement of this apparent issue, which was essentially what prompted this post:

Kyle is an atheist. When asked what odds he places on the possibility that an all-powerful god exists, he says “2%.”

[...] I don’t know what to make of [Kyle’s] probability estimate.

[Kyle] wouldn’t be able to draw on past experiences with different realities (i.e., Kyle didn’t previously experience a bunch of realities and learn that some of them had all-powerful gods while others didn’t). If you push someone like Kyle to explain why they chose 2% rather than 4% or 0.5%, you almost certainly won’t get a clear explanation.

If you gave the same “What probability do you place on the existence of an all-powerful god?” question to a number of self-proclaimed atheists, you’d probably get a wide range of answers.

I bet you’d find that some people would give answers like 10%, others 1%, and others 0.001%. While these probabilities can all be described as “low,” they differ by orders of magnitude. If probabilities like these are used alongside probabilistic decision models, they could have extremely different implications. Going forward, I’m going to call probability estimates like these “hazy probabilities.”

I can sympathise with Smith’s concerns, though I think ultimately we can make sense of Kyle’s probability estimate, and that Kyle can have at least some grounding for it. I’ll now try to explain why I think that, partly to solicit feedback on whether this thinking (and my explanation of it) makes sense.

In the non-supernatural cases mentioned earlier, it seemed clear to me that we had relevant data and theories. We have data on previous viruses and military interventions (albeit likely from different contexts and circumstances), and some relevant theoretical understandings (e.g., from biology and epidemiology, in the virus case). We lack data on a previous completed instance of AGI development, but we have data on cases we could argue are somewhat analogous (e.g., the industrial revolution, development and roll-out of electricity, development of the atomic bomb, development of the internet), and we have theoretical understandings that can guide us in our reference class forecasting.

But do we have any relevant data or theories for the supernatural-type cases?

Assuming that whether the claim is true can affect the world

Let’s first make the assumption (which I’ll reverse later) that these propositions, if true, would at some point have at least some theoretically observable consequences. That is, we’ll first assume that we’re not dealing with an utterly unverifiable, unfalsifiable hypothesis, the truth of which would have no impact on the world anyway (see also Carl Sagan’s dragon).[5] This seems to be the assumption Smith is making, as he writes “Kyle didn’t previously experience a bunch of realities and learn that some of them had all-powerful gods while others didn’t”, implying that it would be theoretically possible to learn whether a given reality had an all-powerful god.

That assumption still leaves open the possibility that, even if these propositions were true, it’d be extremely unlikely we’d observe any evidence of them at all. This clearly makes it harder to assign probabilities to these propositions that are likely to track reality well. But is it impossible to assign any probabilities, or to make sense of probabilities that we assign?

It seems to me (though I’m unsure) that we could assign probabilities using something like the following process:

  1. Try to think of all (or some sample of) the propositions that we know have ever been made that are similar to the proposition in question. This could mean something like one or more of the following:

    • All claims of a religious nature.

    • All claims that many people would consider “supernatural”.

    • All claims where no one really had a particular idea of what consequences we should expect to observe if they were true rather than false. (E.g., ghosts, given that they’re often interpreted as being meant to be invisible and incorporeal.)

    • All claims that are believed to roughly the same level by humanity as a whole or by some subpopulation (e.g., scientists).

  2. Try to figure out how many of these propositions later turned out to be true.

    • This may require debating what counts as still being the same proposition, if the proposition was originally very vague. For example, does the ability to keep objects afloat using magnets count as levitation?

  3. Do something along the lines of reference class forecasting using this “data”.

    • This’ll likely require deciding whether certain data points count as a relevant claim turning out to not be true versus just not yet turning out to be true. This may look like inside-view-style thinking about roughly how likely we think it’d be that we’d have observed evidence for that claim by now if it is true.

    • We might do something like giving some data points more or less “weight” depending on things like how similar they seem to the matter at hand or how confident we are in our assessment of whether that data point “turned out to be true” or not. (I haven’t thought through in detail precisely how you’d do this. You might instead construct multiple separate reference classes, and then combine these like in model combination, giving different weights to the different classes.)

  4. If this reference class forecasting suggests odds of 0%, this seems too confident; it seems that we should never use probabilities of 0 or 1. One option for handling this would be Laplace’s rule of succession.

    • For example, if we found that 18 out of 18 relevant claims for which we “have data” “turned out to be false”, our reference class forecast might suggest there’s a 100% chance (because 18/18=1) that the claim under consideration will turn out to be false too. To avoid this absolute certainty, we add 1 to the numerator and 2 to the denominator (so we do 19/20=0.95), and find that there seems to be a 95% chance the claim under consideration will turn out to be false too. (See the code sketch after this list.)

    • There may be alternative solutions too, such as letting the inside view considerations introduced in the next step move one away from absolute certainty.

  5. Construct an “inside view” relevant to how likely the claim is to be true. This may involve considerations like:

    • Knowledge from other fields (e.g., physics), and thinking about how consistent this claim is with that knowledge (and perhaps also about how well consistency with knowledge from other fields has predicted truth in the past).

    • The extent to which the claim violates Occam’s razor, and how bad it is for a claim to do so (perhaps based on how well sticking to Occam’s razor has seemed to predict the accuracy of claims in the past).

    • Explanations for why the claim would be made and believed as widely as it is even if it isn’t true. E.g., explanations from the evolutionary psychology of religion, or explanations based on memetics.

  6. Combine the reference class forecast and the inside view somehow. (Perhaps qualitatively, or perhaps via explicit model combination.)
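
Here’s a minimal sketch, in Python, of how steps 2 to 6 could be made explicit. All of the numbers (the reference class counts, the inside-view probability, and the weight) are placeholders I’ve made up for illustration, not figures I’m endorsing:

```python
# A toy version of the process above, with made-up numbers.

def laplace_rule_of_succession(true_count: int, total_count: int) -> float:
    """Estimated probability that the next claim of this kind is true,
    avoiding assigning exactly 0 or 1 (step 4)."""
    return (true_count + 1) / (total_count + 2)

# Steps 2-3: suppose 0 of 18 relevant past claims turned out to be true.
outside_view = laplace_rule_of_succession(true_count=0, total_count=18)  # 1/20 = 0.05

# Step 5: an inside-view probability, e.g. after considering physics,
# Occam's razor, and debunking explanations. Pure assumption here.
inside_view = 0.01

# Step 6: combine the two views, here via a simple weighted linear mixture.
# The weight is another judgement call, not something the process dictates.
weight_on_outside_view = 0.6
combined = (weight_on_outside_view * outside_view
            + (1 - weight_on_outside_view) * inside_view)

print(f"Outside view: {outside_view:.3f}; combined estimate: {combined:.3f}")
```

(Linear pooling is just one way to do the combination in step 6; geometric pooling, or a purely qualitative adjustment, would be alternatives.)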

I don’t expect that many people actually, explicitly use the above process (I personally haven’t). But I think it’d be possible to do so. And if we want to know “what to make of” probability estimates for these sorts of claims, we could perhaps think of what we actually do, which is more implicit/intuitive, as “approximating” that explicit process. (But that’s a somewhat separate and debatable claim; my core claims are consistent with the idea that in practice people are coming to their probability assignments quite randomly.)

Another, probably more realistic way people could arrive at probability estimates for these sorts of claims is the following:

  1. Do some very vague, very implicit version of the above.

    • E.g., just “thinking about” how often things “like this” have seemed true in the past (without actually counting up various cases), and “thinking about” how likely the claim seems to you, when you bear in mind things like physics and Occam’s razor.

  2. Then introspect on how likely this claim “feels” to you, and try to arrive at a number to represent that.

    • One method to do so is Hubbard’s “equivalent bet test” (described here).

Many people may find that method quite suspicious. But there’s evidence that, at least in some domains, it’s possible to become fairly “well calibrated” (i.e., do better than chance at assigning probability estimates) following “calibration training” (see here and here). Ideally, the person using that method would have engaged in such calibration training beforehand. If they have, they might add a third step (or fold it into step 2): an adjustment to account for their tendency to over- or underestimate probabilities (either probabilities in general, or probabilities of roughly this kind).
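
To make that more concrete, here’s a rough sketch of how one could run an equivalent-bet-style elicitation on oneself, followed by a calibration adjustment. The bisection framing, the function name, and the shrinkage factor at the end are all my own placeholders, not Hubbard’s exact procedure or an empirically derived correction:

```python
# A rough, interactive sketch of an equivalent-bet-style elicitation.

def equivalent_bet_test(rounds: int = 6) -> float:
    """Repeatedly ask whether you'd rather bet on the claim being true or on a
    wheel with a known win probability, narrowing in on your indifference point."""
    low, high = 0.0, 1.0
    for _ in range(rounds):
        wheel_p = (low + high) / 2
        answer = input(
            f"Win a prize if the claim is true, or win it with a {wheel_p:.1%} wheel? "
            "[claim/wheel]: "
        ).strip().lower()
        if answer == "wheel":
            # Preferring the wheel suggests your credence is below wheel_p.
            high = wheel_p
        else:
            # Preferring to bet on the claim suggests your credence is above wheel_p.
            low = wheel_p
    return (low + high) / 2

if __name__ == "__main__":
    raw_estimate = equivalent_bet_test()
    # Optional extra step: adjust for known miscalibration. The 0.8 shrinkage
    # factor is an arbitrary placeholder, not a real calibration result.
    adjusted_estimate = raw_estimate * 0.8
    print(f"Raw: {raw_estimate:.3f}; calibration-adjusted: {adjusted_estimate:.3f}")
```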

I’m not aware of any evidence regarding whether people can become well-calibrated for these “supernatural-type claims”. And I believe there’s somewhat limited evidence on how well calibration training generalises across domains. So I think there are major reasons for skepticism, which I’d translate into large confidence intervals on my probability distributions.

But I’m also not aware of any extremely compelling arguments or evidence indicating that people wouldn’t be able to become well-calibrated for these sorts of claims, or that calibration training wouldn’t generalise to domains like this. So for now, I think I’d say that we can make sense of probability estimates for claims like these, and that we should have at least a very weak expectation that methods like the above will result in better probability estimates than if we acted as though we knew nothing at all.

Assuming that whether the claim is true can’t affect the world

I think the much trickier case is if we assume that the truth of these claims would never affect the (natural/physical/whatever) world at all, and would thus never be observable. I think the standard rationalist response to this possibility is dismissiveness, and the argument that, under those conditions, whether or not these claims are true is an utterly meaningless and unimportant question. The claims are empty, and not worth arguing about.

I find this response very compelling, and it’s the one I’ve typically gone with. I think that, if we can just show that probabilities can be meaningfully assigned to all claims that could ever theoretically affect the natural world at all, that’s probably good enough.

But what if, for the sake of the argument, we entertain the possibility that some claims may never affect the natural world, and yet still be important? My not dismissing that possibility outright and immediately may annoy some readers, and I can sympathise with that. But it seems to me at least interesting to think about. And here’s one case where that possibility actually does seem to me like it could be important:

What if non-naturalistic moral realism is “correct”, and what that means is that “moral facts” will never affect the natural world, and will thus never be observable, even in principle—but our actions are still somehow relevant to these moral facts? E.g., what if it could be the case that it’s “good” for us to do one thing rather than another, in some sense that we “really should” care about, and yet “goodness” itself leaves no trace at all in the natural world? (This could perhaps be something like epiphenomenalism, but here I’m going quite a bit beyond what I really know.)

In this case, I think reference class forecasting is useless, because we’d never have any data on the truth or falsehood of any claims of the right type.

But at first glance, it still seems to me like we may be able to make some headway using inside views, or something like arriving at a “feeling” about the likelihood and then quantifying this using the equivalent bet test. I’m very unsure about that, because those methods should usually rely on at least some somewhat relevant data. But it seems like perhaps we can still usefully use considerations like how often Occam’s razor has worked well in the past.

And this also reminds me of Scott Alexander’s post on building intuitions on non-empirical arguments in science (additional post on that here). It also seems reminiscent of some of Eliezer Yudkowsky’s writing on the many-worlds interpretation of quantum mechanics, though I read those posts a little while ago and didn’t have this idea in mind at the time.[6]

Closing remarks

This quick post has become longer than planned, so I’ll stop there. The basic summary is that I tentatively claim we can always assign meaningful probabilities, even to supernatural-type (or even actually supernatural) claims. I’m not claiming we should be confident in these probabilities, and in fact, I expect many people should massively reduce their confidence in their probability estimates. I’m also not claiming that the probabilities people actually assign are reliably better than chance—that’s an empirical question, and again there’d likely be issues of overconfidence.

As I said at the start, a major aim of this post is to get feedback on my thinking. So please let me know what you think in the comments.


  1. See this shortform post of mine for other ways of describing the idea that our probabilities might be relatively “untrustworthy”. ↩︎

  2. I think that my “1 minute” example doesn’t demonstrate the superiority of certain probability distributions to an uninformative prior. This is because we could argue that the issue there is that “1 minute from now” is far more precise than “2 to 300 years from now”, and an uninformative prior would favour the less precise prediction, just as we’d like it to. But I think my other example does indicate, if our intuitions on that are trustworthy, that some probability distributions can be superior to an uninformative prior. This is because, in that example, the predictions mentioned spanned the same amount of time (a millennium), just starting at different points (~now vs ~50 quintillion years from now). ↩︎

  3. These terms can be defined in many different ways. Footnote 15 of this is probably a good quick source. This page is also relevant, but I’ve only skimmed it myself. ↩︎

  4. Though in the case of non-naturalistic moral realism, I might still act as though it’s correct, to a substantial extent, based on a sort of expected value reasoning or Pascal’s wager. But I’m not sure if that makes sense, and it’s not directly relevant for the purposes of this post. (I hope to write a separate post about that idea later.) ↩︎

  5. I acknowledge that this may mean that these claims aren’t “actually supernatural”, but they still seem like more-challenging-than-usual cases for the idea that we can always assign meaningful probabilities. ↩︎

  6. To be clear, I’m not necessarily claiming that Alexander or Yudkowsky would approve of using this sort of logic for topics like non-naturalistic moral realism or the existence of a god, rather than just dismissing those questions outright as meaningless or utterly inconsequential. I’m just drawing what seems to me, from memory, some potential connections. ↩︎