Mistakes with Conservation of Expected Evidence

Epistemic Status: I've really spent some time wrestling with this one. I am highly confident in most of what I say. However, this differs from section to section. I'll put more specific epistemic statuses at the end of each section.

Some of this post is generated from mistakes I've seen people make (or, heard people complain about) in applying conservation-of-expected-evidence or related ideas. Other parts of this post are based on mistakes I made myself. I think that I used a wrong version of conservation-of-expected-evidence for some time, and propagated some wrong conclusions fairly deeply; so, this post is partly an attempt to work out the right conclusions for myself, and partly a warning to those who might make the same mistakes.

All of the mistakes I'll argue against have some good insight behind them. They may be something which is usually true, or something which points in the direction of a real phenomenon while making an error. I may come off as nitpicking.

1. "You can't predict that you'll update in a particular direction."

Starting with an easy one.

It can be tempting to simplify conservation of expected evidence to say you can't predict the direction in which your beliefs will change. This is often approximately true, and it's exactly true in symmetric cases where your starting belief is 50-50 and the evidence is equally likely to point in either direction.

To see why it is wrong in general, consider an extreme case: a universal law, which you mostly already believe to be true. At any time, you could see a counterexample, which would make you jump to complete disbelief. That's a small probability of a very large update downwards. Conservation of expected evidence implies that you must move your belief upwards when you don't see such a counterexample. But you consider that case to be quite likely. So, considering only which direction your beliefs will change, you can be fairly confident that your belief in the universal law will increase—in fact, as confident as you are in the universal law itself.

The critical point here is direction vs. magnitude. Conservation of expected evidence takes magnitude as well as direction into account. The small but very probable increase is balanced by the large but very improbable decrease.
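To make the asymmetry concrete, here is a small numerical sketch (the prior and the per-observation counterexample rate are made-up numbers, chosen only for illustration):

```python
# A universal law believed with prior 0.95. If the law is false, each
# observation reveals a counterexample with probability 0.1; if the law is
# true, it never does. Conservation of expected evidence says the prior must
# equal the expectation of the posterior over the possible observations.

prior = 0.95   # P(law)
c = 0.1        # P(counterexample | law false), per observation

p_no_ce = prior + (1 - prior) * (1 - c)  # P(no counterexample seen)
p_ce = (1 - prior) * c                   # P(counterexample seen)

post_no_ce = prior / p_no_ce             # belief goes UP, slightly
post_ce = 0.0                            # belief crashes to zero

expected_posterior = p_no_ce * post_no_ce + p_ce * post_ce

print(round(p_no_ce, 4))             # you can be ~99.5% sure you'll update up
print(round(post_no_ce, 4))          # the small, very probable increase
print(round(expected_posterior, 4))  # yet the expected posterior equals the prior
```

You can predict the direction of your update with about 99.5% confidence, yet the expected belief change is still exactly zero.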

The fact that we're talking about universal laws and counterexamples may fool you into thinking about logical uncertainty. You can think about logical uncertainty if you want, but this phenomenon is present in the fully classical Bayesian setting; there's no funny business with non-Bayesian updates here.

Epistemic status: confidence at the level of mathematical reasoning.

2. "Yes requires the possibility of no."

Scott's recent post, Yes Requires the Possibility of No, is fine. I'm referring to a possible mistake which one could make in applying the principle illustrated there.

"Those who dream do not know they dream, but when you are awake, you know you are awake."—Eliezer, Against Modest Epistemology

Sometimes, I look around and ask myself whether I'm in a dream. When this happens, I generally conclude very confidently that I'm awake.

I am not similarly capable of determining that I'm dreaming. My dreaming self doesn't have the self-awareness to question whether he is dreaming in this way.

(Actually, very occasionally, I do. I either end up forcing myself awake, or I become lucid in the dream. Let's ignore that possibility for the purpose of the thought experiment.)

I am not claiming that my dreaming self is never deluded into thinking he is awake. On the contrary, I have those repeatedly-waking-up-only-to-find-I'm-still-dreaming dreams occasionally. What I'm saying is that I am not able to perform the actually good test, where I look around and really consciously consider whether or not I might be dreaming. If I want to know if I'm awake, I can just check.

A "yes-requires-the-possibility-of-no" mindset might conclude that my "actually good test" is no good at all, because it can't say no. I conclude the exact opposite: my test seems really quite effective, because I only successfully complete it while awake.

Sometimes, your thought processes really are quite suspect; yet, there's a sanity check you can run which tells you the truth. If you're deluding yourself, the general category of "things which you think are simple sanity checks you can run" is not trustworthy. If you're deluding yourself, you're not even going to think about the real sanity checks. But that does not in itself detract from the effectiveness of the sanity check.

The general moral in terms of conservation of expected evidence is: "'Yes' only requires the possibility of silence." In many cases, you can meaningfully say yes without being able to meaningfully say no. For example, the axioms of set theory could prove their own inconsistency. They could not prove themselves consistent (without also proving themselves inconsistent). This does not detract from the effectiveness of a proof of inconsistency! Again, although the example involves logic, there's nothing funny going on with logical uncertainty; the phenomenon under discussion is understandable in fully Bayesian terms.

Symbolically: as is always the case, you don't really want to update on the raw proposition, but rather on the fact that you observed the proposition, to account for selection bias. Conservation of expected evidence can be written P(H) = P(H|E)P(E) + P(H|¬E)P(¬E), but if we re-write it to explicitly show the "observation of evidence", it becomes P(H) = P(H|obs E)P(obs E) + P(H|¬obs E)P(¬obs E). It does not become P(H) = P(H|obs E)P(obs E) + P(H|obs ¬E)P(obs ¬E). In English: evidence is balanced between making the observation and not making the observation, not between the observation and the observation of the negation.
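Here is a toy illustration of the obs-vs-silence balance, using the dream example (the joint probabilities are invented for illustration, not measured):

```python
# H = "awake", the evidence channel is the sanity check, which can either
# complete successfully ("obs") or stay silent. It never says "no": silence
# covers both "didn't run the check" and "couldn't complete it while dreaming".

# Assumed joint distribution P(H, observation):
p = {
    ("awake", "obs"):    0.40,  # awake, and the check completes
    ("awake", "silent"): 0.50,  # awake, but never bother to check
    ("dream", "obs"):    0.00,  # the dreaming self can't complete the check
    ("dream", "silent"): 0.10,
}

p_H = sum(v for (h, o), v in p.items() if h == "awake")
p_obs = sum(v for (h, o), v in p.items() if o == "obs")
p_silent = 1 - p_obs

p_H_given_obs = p[("awake", "obs")] / p_obs
p_H_given_silent = p[("awake", "silent")] / p_silent

# Conservation holds between observing and NOT observing:
# P(H) = P(H|obs)P(obs) + P(H|silent)P(silent)
balance = p_H_given_obs * p_obs + p_H_given_silent * p_silent

print(p_H_given_obs)               # completing the check is decisive evidence
print(round(p_H_given_silent, 3))  # silence is only weak evidence against
print(round(balance, 6))           # equals the prior P(H)
```

The test never says "no", yet completing it is decisive; the balancing counterweight is the (weak) evidence carried by silence.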

Epistemic status: confidence at the level of mathematical reasoning for the core claim of this section. However, some applications of the idea (such as to dreams, my central example) depend on trickier philosophical issues discussed in the next section. I'm only moderately confident I have the right view there.

3. "But then what do you say to the Republican?"

I suspect that many readers are less than fully on board with the claims I made in the previous section. Perhaps you think I'm grossly overconfident about being awake. Perhaps you think I'm neglecting the outside view, or ignoring something to do with timeless decision theory.

A lot of my thinking in this post was generated by grappling with some points made in Inadequate Equilibria. To quote the relevant paragraph of Against Modest Epistemology:

Or as someone advocating what I took to be modesty recently said to me, after I explained why I thought it was sometimes okay to give yourself the discretion to disagree with mainstream expertise when the mainstream seems to be screwing up, in exactly the following words: "But then what do you say to the Republican?"

Let's put that in (pseudo-)conservation-of-expected-evidence terms: we know that just applying one's best reasoning will often leave one overconfident in one's idiosyncratic beliefs. Doesn't that mean "apply your best reasoning" is a bad test, which fails to conserve expected evidence? So, should we not adjust downward in general?

In the essay, Eliezer strongly advises allowing yourself to have an inside view even when there's an outside view which says inside views broadly similar to yours tend to be mistaken. But doesn't that go against what he said in Ethical Injunctions?

Ethical Injunctions argues that there are situations where you should not trust your reasoning, and fall back on a general rule. You do this because, in the vast majority of cases of that kind, your oh-so-clever reasoning is mistaken and the general rule saves you from the error.

In Against Modest Epistemology, Eliezer criticizes arguments which rely on putting arguments in very general categories and taking the outside view:

At its epistemological core, modesty says that we should abstract up to a particular very general self-observation, condition on it, and then not condition on anything else because that would be inside-viewing. An observation like, "I'm familiar with the cognitive science literature discussing which debiasing techniques work well in practice, I've spent time on calibration and visualization exercises to address biases like base rate neglect, and my experience suggests that they've helped," is to be generalized up to, "I use an epistemology which I think is good." I am then to ask myself what average performance I would expect from an agent, conditioning only on the fact that the agent is using an epistemology that they think is good, and not conditioning on that agent using Bayesian epistemology or debiasing techniques or experimental protocol or mathematical reasoning or anything in particular.
Only in this way can we force Republicans to agree with us… or something.

He instead advises that we should update on all the information we have, use our best arguments, and reason about situations in full detail:

If you're trying to estimate the accuracy of your epistemology, and you know what Bayes's Rule is, then—on naive, straightforward, traditional Bayesian epistemology—you ought to condition on both of these facts, and estimate P(accuracy|know_Bayes) instead of P(accuracy). Doing anything other than that opens the door to a host of paradoxes.

In Ethical Injunctions, he seems to warn against that very thing:

But surely… if one is aware of these reasons… then one can simply redo the calculation, taking them into account. So we can rob banks if it seems like the right thing to do after taking into account the problem of corrupted hardware and black swan blowups. That's the rational course, right?
There's a number of replies I could give to that.
I'll start by saying that this is a prime example of the sort of thinking I have in mind, when I warn aspiring rationalists to beware of cleverness.

Now, maybe Eliezer has simply changed views on this over the years. Even so, that leaves us with the problem of how to reconcile these arguments.

I'd say the following: modest epistemology points out a simple improvement over the default strategy: "In any group of people who disagree, they can do better by moving their beliefs toward each other." "Lots of crazy people think they've discovered secrets of the universe, and the number of sane people who truly discover such secrets is quite small; so, we can improve the average by never believing we've discovered secrets of the universe." If we take a timeless decision theory perspective (or similar), this is in fact an improvement; however, it is far from the optimal policy, and has a form which blocks further progress.

Ethical Injunctions talks about rules with greater specificity, and less progress-blocking nature. Essentially, a proper ethical injunction is actually the best policy you can come up with, whereas the modesty argument stops short of that.

Doesn't the "actually best policy you can come up with" risk overly-clever policies which depend on broken parts of your cognition? Yes, but your meta-level arguments about which kinds of argument work should be independent sources of evidence from your object-level confusion. To give a toy example: let's say you really, really want 8+8 to be 12 due to some motivated cognition. You can still decide to check by applying basic arithmetic. You might not do this, because you know it isn't to the advantage of the motivated cognition. However, if you do check, it is actually quite difficult for the motivated cognition to warp basic arithmetic.

There's also the fact that choosing a modesty policy doesn't really help the Republican. I think that's the critical kink in the conservation-of-expected-evidence version of modest epistemology. If you, while awake, decide to doubt whether you're awake (no matter how compelling the evidence that you're awake seems to be), then you're not really improving your overall correctness.

So, all told, it seems like conservation of expected evidence has to be applied to the details of your reasoning. If you put your reasoning in a more generic category, it may appear that a much more modest conclusion is required by conservation of expected evidence. We can justify this in classical probability theory, though in this section it is even more tempting to consider exotic decision-theoretic and non-omniscience considerations than it was previously.

Epistemic status: the conclusion is mathematically true in classical Bayesian epistemology. I am subjectively >80% confident that the conclusion should hold in >90% of realistic cases, but it is unclear how to make this into a real empirical claim. I'm unsure enough of how ethical injunctions should work that I could see my views shifting significantly. I'll mention pre-rationality as one confusion I have which seems vaguely related.

4. "I can't credibly claim anything if there are incentives on my words."

Another rule which one might derive from Scott's Yes Requires the Possibility of No is: you can't really say anything if pressure is being put on you to say a particular thing.

Now, I agree that this is somewhat true, particularly in simple cases where pressure is being put on you to say one particular thing. However, I've suffered from learned helplessness around this. I sort of shut down when I can identify any incentives at all which could make my claims suspect, and hesitate to claim anything. This isn't a very useful strategy. Either "just say the truth" or "just say whatever you feel you're expected to say" would likely be a better strategy.

One idea is to "call out" the pressure you feel. "I'm having trouble saying anything because I'm worried what you will think of me." This isn't always a good idea, but it can often work fairly well. Someone who is caving to incentives isn't very likely to say something like that, so it provides some evidence that you're being genuine. It can also open the door to other ways you and the person you're talking to can solve the incentive problem.

You can also "call out" something even if you're unable or unwilling to explain. You just say something like "there's some thing going on"… or "I'm somehow frustrated with this situation"… or whatever you can manage to say.

This "call out" idea also works (to some extent) on motivated cognition. Maybe you're worried about the social pressure on your beliefs because it might influence the accuracy of those beliefs. Rather than stressing about this and going into a spiral of self-analysis, you can just state to yourself that that's a thing which might be going on, and move forward. Making it explicit might open up helpful lines of thinking later.

Another thing I want to point out is that most people are willing to place at least a little faith in your honesty (and not irrationally so). Just because you have a story in mind where they should assume you're lying doesn't mean that's the only possibility they are (or should be) considering. One problematic incentive doesn't fully determine the situation. (This one also applies internally: identifying one relevant bias or whatever doesn't mean you should block off that part of yourself.)

Epistemic status: low confidence. I imagine I would have said something very different if I were more of an expert in this particular thing.

5. "Your true reason screens off any other evidence your argument might include."

In The Bottom Line, Eliezer describes a clever arguer who first writes the conclusion which they want to argue for at the bottom of a sheet of paper, and then comes up with as many arguments as they can to put above that. In the thought experiment, the clever arguer's conclusion is actually determined by who can pay the clever arguer more. Eliezer says:

So the handwriting of the curious inquirer is entangled with the signs and portents and the contents of the boxes, whereas the handwriting of the clever arguer is evidence only of which owner paid the higher bid. There is a great difference in the indications of ink, though one who foolishly read aloud the ink-shapes might think the English words sounded similar.

Now, Eliezer is trying to make a point about how you form your own beliefs—that the quality of the process which determines which claims you make is what matters, and the quality of any rationalizations you give doesn't change that.

However, reading that, I came away with the mistaken idea that someone listening to a clever arguer should ignore all the clever arguments. Or, generalizing further, what you should do when listening to any argument is try to figure out what process wrote the bottom line, ignoring any other evidence provided.

This isn't the worst possible algorithm. You really should heavily discount evidence provided by clever arguers, because it has been heavily cherry-picked. And almost everyone does a great deal of clever arguing. Even a hardboiled rationalist will tend to present evidence for the point they're trying to make rather than against (perhaps because that's a fairly good strategy for explaining things—sampling evidence at random isn't a very efficient way of conversing!).

However, ignoring arguments and attending only to the original causes of belief has some absurd consequences. Chief among them: it would imply that you should ignore mathematical proofs if the person who came up with the proof only searched for positive proofs and wouldn't have spent time trying to prove the opposite. (This ties in with section 2: failing to find a proof is like remaining silent.)

This is bonkers. Proof is proof. And again, this isn't some special non-Bayesian phenomenon due to logical uncertainty. A Bayesian can and should recognize decisive evidence, whether or not it came from a clever arguer.

Yet, I really held this position for a while. I treated mathematical proofs as an exceptional case, rather than as a phenomenon continuous with weaker forms of evidence. If a clever arguer presented anything short of a mathematical proof, I would remind myself of how convincing cherry-picked evidence can seem. And I'd notice how almost everyone mostly cherry-picked when explaining their views.

This is like throwing out data when it has been contaminated by selection bias, rather than making a model of the selection bias so that you can update on the data appropriately. It might be a good practice in scientific publications, but if you take it as a universal, you could find reasons to throw out just about everything (especially if you start worrying about anthropic selection effects).

The right thing to do is closer to this: figure out how convincing you expect evidence to look given the extent of selection bias. Then, update on the difference between what you see and what you expected. If a clever arguer makes a case which is much better than what you would have expected they could make, you can update up. If it is worse than you'd expect, even if the evidence would otherwise look favorable, you update down.
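One way to sketch this, with invented numbers: suppose a clever arguer runs ten noisy tests and shows you only the favorable ones. Once you model the selection, what matters is how many favorable results they managed to find, compared with what you'd expect under each hypothesis:

```python
from math import comb

n = 10        # tests the arguer ran (part of your model of the selection)
p_true = 0.7  # P(a test comes out favorable | H true)
p_false = 0.3 # P(a test comes out favorable | H false)

def binom_pmf(k, n, p):
    # probability of exactly k favorable results out of n
    return comb(n, k) * p**k * (1 - p)**(n - k)

def posterior(k, prior=0.5):
    """P(H | the arguer shows k favorable tests), modeling the cherry-picking.

    The arguer shows every favorable result and hides the rest, so the
    evidence is the count k, not "k independent confirmations of H".
    """
    like_true = binom_pmf(k, n, p_true)
    like_false = binom_pmf(k, n, p_false)
    return like_true * prior / (like_true * prior + like_false * (1 - prior))

print(round(posterior(7), 3))  # as strong a case as expected if H: update up
print(round(posterior(5), 3))  # five favorable results LOOK supportive, but are
                               # exactly neutral once the selection is modeled
print(round(posterior(3), 3))  # worse than expected: update down, even though
                               # the arguer showed you three favorable results
```

The naive listener counts k pieces of favorable evidence; the listener with a selection model compares k against the roughly 7 favorable results a clever arguer should be able to find if H were true.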

My view also made me uncomfortable presenting a case for my own beliefs, because I would think of myself as a clever arguer any time I did something other than recount the actual historical causes of my belief (or honestly reconsider my belief on the spot). Grognor made a similar point in Unwilling or Unable to Explain:

Let me back up. Speaking in good faith entails giving the real reasons you believe something rather than a persuasive impromptu rationalization. Most people routinely do the latter without even noticing. I'm sure I still do it without noticing. But when I do notice I'm about to make something up, instead I clam up and say, "I can't explain the reasons for this claim." I'm not willing to disingenuously reference a scientific paper that I'd never even heard of when I formed the belief it'd be justifying, for example. In this case silence is the only feasible alternative to speaking in bad faith.

While I think there's something to this mindset, I no longer think it makes sense to clam up when you can't figure out how you originally came around to the view which you now hold. If you think there are other good reasons, you can give them without violating good faith.

Actually, I really wish I could draw a sharper line here. I'm essentially claiming that a little cherry-picking is OK if you're just trying to convince someone of the view which you see as the truth, so long as you're not intentionally hiding anything. This is an uncomfortable conclusion.

Epistemic status: confident that the views I claim are mistaken are mistaken. Less confident about best-practice claims.

6. "If you can't provide me with a reason, I have to assume you're wrong."

If you take the conclusion of the previous section too far, you might reason as follows: if someone is trying to claim X, surely they're trying to give you some evidence toward X. If they claim X and then you challenge them for evidence, they'll try to tell you any evidence they have. So, if they come up with nothing, you have to update down, since you would have updated upwards otherwise. Right?

I think most people make this mistake due to simple conversation norms: when navigating a conversation, people have to figure out what everyone else is willing to assume, in order to make sensible statements with minimal friction. So, we look for obvious signs of whether a statement was accepted by everyone vs. rejected. If someone was asked to provide a reason for a statement they made and failed to do so, that's a fairly good signal that the statement hasn't been accepted into the common background assumptions for the conversation. The fact that other people are likely to use this heuristic as well makes the signal even stronger. So, assertions which can't be backed up with reasons are likely to be rejected.

This is almost the opposite mistake from the previous section; the previous one was "justifications don't matter", whereas this idea is "only justifications matter".

I think something good happens when everyone in a conversation recognizes that people can believe things for good reason without being able to articulate those reasons. (This includes yourself!)

You can't just give everyone a pass to make unjustified claims and assert that they have strong inarticulable reasons. Or rather, you can give everyone a pass to do that, but you don't have to take them seriously when they do it. However, in environments of high intellectual trust, you can take it seriously. Indeed, applying the usual heuristic will likely cause you to update in the wrong direction.

Epistemic status: moderately confident.


I think all of this is fairly important—if you're like me, you've likely made some mistakes along these lines. I also think there are many issues related to conservation of expected evidence which I still don't fully understand, such as explanation vs. rationalization, ethical injunctions, and pre-rationality. Tsuyoku Naritai!