# nshepperd

Karma: 1,621
• Proof of #4, but with unnecessary calculus:

Not only is there an odd number of tricolor triangles, but they come in pairs according to their orientation (RGB clockwise/anticlockwise). Proof: define a continuously differentiable vector field on the plane, by letting the field at each vertex be 0, and the field in the center of each edge be a vector of magnitude 1 pointing in the direction R->G->B->R (or 0 if the two adjacent vertices are the same color). Extend the field to the complete edges, then the interiors of the triangles, by some interpolation method with continuous derivative (eg. cosine interpolation).

Assume the line integral along one unit edge in the direction R->G or G->B or B->R to be 1/3 (without loss of generality, since we can rescale the graph/vectors to make this true). Then a parity argument similar to Sperner’s 1-d lemma (or the FTC) shows that the clockwise line integral along each large edge is 1/3, hence the line integral around the large triangle is 1/3 + 1/3 + 1/3 = 1.

By Green’s theorem, this is equal to the integrated curl of the field in the interior of the large triangle, and hence equal (by another invocation of Green’s theorem) to the summed clockwise line integrals around each small triangle. The integrals around a unicolor or bicolor triangle are 0 and −1/3 + 1/3 + 0 = 0 respectively, leaving only tricolor triangles, whose integral is ±1 depending on orientation. Thus: (tricolor clockwise) − (tricolor anticlockwise) = 1. QED.
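
(For anyone who wants to check the signed-count claim empirically rather than via Green’s theorem: here is a small simulation, entirely my own illustration, that generates random Sperner colorings of a triangulated triangle and verifies that the signed tricolor count comes out to exactly 1 every time.)

```python
import random

def tri_sign(c0, c1, c2):
    # +1 for one cyclic order of the colors R=0, G=1, B=2 (vertices given in
    # counterclockwise order), -1 for the other; 0 unless all three appear
    if {c0, c1, c2} != {0, 1, 2}:
        return 0
    return 1 if (c0, c1, c2) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)) else -1

def signed_tricolor_count(n, color):
    # iterate over the small triangles of the side-n triangulation,
    # listing each triangle's vertices in counterclockwise order
    total = 0
    for i in range(n):
        for j in range(n - i):
            total += tri_sign(color[i, j], color[i + 1, j], color[i, j + 1])
            if i + j <= n - 2:  # the inverted triangle sharing this cell
                total += tri_sign(color[i + 1, j + 1], color[i, j + 1], color[i + 1, j])
    return total

def random_sperner_coloring(n, rng):
    # corners fixed to 0, 1, 2; each big edge uses only its endpoints' colors;
    # interior vertices are colored freely (the Sperner conditions)
    color = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            if (i, j) == (0, 0):   c = 0
            elif (i, j) == (n, 0): c = 1
            elif (i, j) == (0, n): c = 2
            elif j == 0:           c = rng.choice((0, 1))
            elif i == 0:           c = rng.choice((0, 2))
            elif i + j == n:       c = rng.choice((1, 2))
            else:                  c = rng.choice((0, 1, 2))
            color[i, j] = c
    return color

rng = random.Random(0)
counts = {signed_tricolor_count(8, random_sperner_coloring(8, rng)) for _ in range(200)}
assert counts == {1}  # one excess tricolor triangle of positive orientation, always
```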

• We got to dis­cussing this on #less­wrong re­cently. I don’t see any­one here point­ing this out yet di­rectly, so:

Can you tech­ni­cally Strong Upvote ev­ery­thing? Well, we can’t stop you. But we’re hop­ing a com­bi­na­tion of mostly-good-faith + triv­ial in­con­ve­niences will re­sult in peo­ple us­ing Strong Upvotes when they feel it’s ac­tu­ally im­por­tant.

This ap­proach, hop­ing that good faith will pre­vent peo­ple from us­ing Strong votes “too much”, is a good ex­am­ple of an Ass­hole Filter (linkposted on LW last year). You’ve set some (un­clear) bound­aries, then due to not en­forc­ing them, re­ward those who vi­o­late them with in­creased con­trol over the site con­ver­sa­tion. Chris_Leong ges­tures to­wards this with­out di­rectly nam­ing it in a sibling com­ment.

In my opinion “maybe put limits on strong up­votes if this seems to be a prob­lem” is not the cor­rect re­sponse to this prob­lem, nor would be ban­ning or oth­er­wise ‘dis­ci­plin­ing’ users who use strong votes “too much”. The cor­rect re­sponse is to re­move the ass­hole filter by al­ter­ing the in­cen­tives to match what you want to hap­pen. Op­tions in­clude:

1. Mak­ing votes nor­mal by de­fault but en­courag­ing users to use strong votes freely, up to 100% of the time, so that good faith users are not dis­ad­van­taged. (Note: still dis­en­fran­chises users who don’t no­tice that this fea­ture ex­ists, but maybe that’s ok.)

2. Mak­ing votes strong by de­fault so that it’s mak­ing a “weak” vote that takes ex­tra effort. (Note: this gives users who care­fully make weak votes when they have weak opinions less weight, but at least they do this with eyes open and in the ab­sence of per­verse in­cen­tives.)

3. #2 but with some al­gorith­mic ad­just­ment to give care­ful users more weight in­stead of less. This seems ex­tremely difficult to get right (cf. slash­dot meta­mod­er­a­tion). Prob­a­bly the cor­rect an­swer there is some form of col­lab­o­ra­tive fil­ter­ing.

Per­son­ally I favour solu­tion #1.

I’ll add that this is not just a hy­po­thet­i­cal troll-con­trol is­sue. This is also a UX is­sue. Forc­ing users to nav­i­gate an un­clear eth­i­cal ques­tion and pris­oner’s dilemma—how much strong vot­ing is “too much”—in or­der to use the site is un­pleas­ant and a bad user ex­pe­rience. There should not be a “wrong” ac­tion available in the user in­ter­face.

PS. I’ll concede that making strong votes an actually limited resource that is enforced by the site economically (eg. with a token bucket quota) would in a way also work, due to eliminating the perceived need for strong votes to be limited by “good faith”. But IMO the need is only perceived, and not real. Voting is for expressing preferences, and preferences are unlimited.
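
(For concreteness, the token bucket version could be as simple as the sketch below; all names and numbers are hypothetical, not a proposal for the actual site code. Each strong vote spends a token; tokens refill at a fixed rate up to a cap.)

```python
import time

class StrongVoteQuota:
    """Token bucket: strong votes spend tokens, which refill at a steady rate."""

    def __init__(self, capacity, refill_per_sec, clock=time.monotonic):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)  # start with a full bucket
        self.clock = clock
        self.last_refill = clock()

    def _refill(self):
        now = self.clock()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last_refill = now

    def try_strong_vote(self):
        """Spend one token if available; otherwise the vote falls back to normal."""
        self._refill()
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

The clock is injected so the behaviour is testable: with capacity 3 and a slow refill rate, a fourth consecutive strong vote is refused until enough time has passed.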

• Good post!

Is it common to use Kalman filters for things that have nonlinear transformations, by approximating the posterior with a Gaussian (eg. calculating the closest Gaussian distribution to the true posterior by JS-divergence or the like)? How well would that work?
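
(To sketch what I mean: the standard versions of this, as in the extended/unscented Kalman filter family, match moments, which gives the Gaussian closest in KL(true‖approx) rather than JS-divergence. A brute-force Monte Carlo version of that moment-matching step, purely illustrative:)

```python
import numpy as np

def gaussian_moment_match(mean, var, f, n_samples=100_000, seed=0):
    """Approximate f(X) for X ~ N(mean, var) by a Gaussian with the same
    mean and variance (moment matching minimizes KL(true || approx))."""
    rng = np.random.default_rng(seed)
    samples = f(rng.normal(mean, np.sqrt(var), size=n_samples))
    return float(samples.mean()), float(samples.var())

# push N(0, 0.01) through the nonlinearity sin(x); for small variance the
# result stays close to N(0, 0.01), since sin is nearly linear near 0
m, v = gaussian_moment_match(0.0, 0.01, np.sin)
```

The practical filters replace the sampling with linearization (EKF) or a handful of deterministic sigma points (UKF), but the Gaussian-projection idea is the same.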

Grammar comment: you seem to have accidentally dropped a few words at

Mea­sur­ing mul­ti­ple quan­tities: what if we want to mea­sure two or more quan­tities, such as tem­per­a­ture and hu­midity? Fur­ther­more, we might know that these are [miss­ing words?] Then we now have mul­ti­vari­ate nor­mal dis­tri­bu­tions.

• How big was your mir­ror, and how much of your face did you see in it?

• C is ba­si­cally a state­ment that, if in­cluded in a valid ar­gu­ment about the truth of P, causes the ar­gu­ment to tell us ei­ther P or ~P. That’s defi­ni­tion­ally what it means to be able to know the crite­rion of truth.

That’s not how al­gorithms work and seems… in­co­her­ent.

That you want to deny C is great,

I did not say that ei­ther.

be­cause I think (as I’m find­ing with Said), that we already agree, and any dis­agree­ment is the con­se­quence of mi­s­un­der­stand­ing, prob­a­bly be­cause it comes too close to sound­ing to you like a po­si­tion that I would also re­ject, and the rest of the fun­da­men­tal dis­agree­ment is one of sen­ti­ment, per­spec­tive, hav­ing worked out the de­tails, and em­pha­sis.

No, I don’t think we do agree. It seems to me you’re deeply con­fused about all of this stuff.

Here’s an exercise: say that we replace “C” by a specific concrete algorithm, for instance the elementary long multiplication algorithm used by primary school children to multiply numbers.

Does anything whatsoever about your argument change with this substitution? Have we proved that we can explain multiplication to a rock? Or perhaps we’ve proved that this algorithm doesn’t exist, and neither do schools?
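
(To be fully concrete about the substitution, here is that algorithm as runnable code, my own transcription of the schoolbook method. Its existence plainly gives us no way to explain multiplication to a rock:)

```python
def long_multiply(a, b):
    """Schoolbook long multiplication, digit by digit with carries."""
    da = [int(ch) for ch in reversed(str(a))]  # least significant digit first
    db = [int(ch) for ch in reversed(str(b))]
    result = [0] * (len(da) + len(db))
    for i, x in enumerate(da):
        carry = 0
        for j, y in enumerate(db):
            total = result[i + j] + x * y + carry
            result[i + j] = total % 10  # write down the ones digit
            carry = total // 10         # carry the rest
        result[i + len(db)] += carry
    return int("".join(str(d) for d in reversed(result)))

assert long_multiply(123, 456) == 56088
```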

Another ex­er­cise: sup­pose, as a coun­ter­fac­tual, that Laplace’s de­mon ex­ists, and fur­ther­more likes an­swer­ing ques­tions. Now we can take a spe­cific al­gorithm C: “ask the de­mon your ques­tion, and await the an­swer, which will be re­ceived within the minute”. By con­struc­tion this al­gorithm always re­turns the cor­rect an­swer. Now, your task is to give the al­gorithm, given only these premises, that I can fol­low to con­vince a rock that Eu­clid’s the­o­rem is true.

• It seems that you don’t get it. Said just demon­strated that even if C ex­ists it wouldn’t im­ply a uni­ver­sally com­pel­ling ar­gu­ment.

In other words, this:

Sup­pose we know the crite­rion of truth, C; that is, there ex­ists (not coun­ter­fac­tu­ally but ac­tu­ally as in any­one can ob­serve this thing) a pro­ce­dure/​​al­gorithm to as­sess if any given state­ment is true. Let P be a state­ment. Then there ex­ists some ar­gu­ment, A, con­tin­gent on C such that A im­plies P or ~P. Thus for all P we can know if P or ~P. This would make A uni­ver­sally com­pel­ling, i.e. A is a mind-in­de­pen­dent ar­gu­ment for the truth value of all state­ments that would con­vince even rocks.

ap­pears to be a to­tal non se­quitur. How does the ex­is­tence of an al­gorithm en­able you to con­vince a rock of any­thing? At a min­i­mum, an al­gorithm needs to be im­ple­mented on a com­puter… Your state­ment, and there­fore your con­clu­sion that C doesn’t ex­ist, doesn’t fol­low at all.

(Note: In this com­ment, I am not claiming that C (as you’ve defined it) ex­ists, or agree­ing that it needs to ex­ist for any of my crit­i­cisms to hold.)

• It doesn’t seem to be a straw­man of what eg. gwor­ley and TAG have been say­ing, judg­ing by the re­peated de­mands for me to sup­ply some uni­ver­sally com­pel­ling “crite­rion of truth” be­fore any of the stan­dard crit­i­cisms can be ap­plied. Maybe you ac­tu­ally dis­agree with them on this point?

It doesn’t seem like ap­ply­ing full force in crit­i­cism is a pri­or­ity for the ‘pos­tra­tional­ity’ en­vi­sioned by the OP, ei­ther, or else they would not have given ex­am­ples (com­pel­ling­ness-of-story, will­ing­ness-to-life) so triv­ial to show as bad ideas us­ing stan­dard ar­gu­ments.

• As for my story about how the brain works: yes, it is ob­vi­ously a vast sim­plifi­ca­tion. That does not make it false, es­pe­cially given that “the brain learns to use what has worked be­fore and what it thinks is likely to make it win in the fu­ture” is ex­actly what Eliezer is ad­vo­cat­ing in the above post.

Even if true, this is differ­ent from “epistemic ra­tio­nal­ity is just in­stru­men­tal ra­tio­nal­ity”; as differ­ent as adap­ta­tion ex­ecu­tors are from fit­ness max­imisers.

Separately, it’s in­ter­est­ing that you quote this part:

The im­por­tant thing is to hold noth­ing back in your crit­i­cisms of how to crit­i­cize; nor should you re­gard the un­avoid­abil­ity of loopy jus­tifi­ca­tions as a war­rant of im­mu­nity from ques­tion­ing.

Be­cause it seems to me that this is ex­actly what ad­vo­cates of “pos­tra­tional­ity” here are not do­ing, when they take the ab­sence of uni­ver­sally com­pel­ling ar­gu­ments as li­cense to dis­miss ra­tio­nal­ity and truth-based ar­gu­ments against their po­si­tions.¹

Eliezer also says this:

Always ap­ply full force, whether it loops or not—do the best you can pos­si­bly do, whether it loops or not—and play, ul­ti­mately, to win.

It seems to me that ap­ply­ing full force in crit­i­cism of pos­tra­tional­ity amounts to some­thing like the be­low:

“In­deed, com­pel­ling­ness-of-story, will­ing­ness-to-life, mythic mode, and many other non-ev­i­dence-based crite­ria are al­ter­na­tive crite­ria which could be used to se­lect be­liefs. How­ever we have huge amounts of ev­i­dence (cat­a­logued in the Se­quences, and in the heuris­tics and bi­ases liter­a­ture) that these crite­ria are not strongly cor­re­lated to truth, and there­fore will lead you to hold­ing wrong be­liefs, and fur­ther­more that hold­ing wrong be­liefs is in­stru­men­tally harm­ful, and, and [the rest of the se­quences, Eth­i­cal In­junc­tions, etc]...”

“Mean­while, we also have vast tracts of ev­i­dence that sci­ence works, that re­sults de­rived with valid statis­ti­cal meth­ods repli­cate far more of­ten than any oth­ers, that be­liefs ap­proach­ing truth re­quires ac­cu­mu­lat­ing ev­i­dence by ob­ser­va­tion. I would put the prob­a­bil­ity that ra­tio­nal meth­ods are the best crite­ria I have for se­lect­ing be­liefs at . Hence, it seems de­ci­sively not worth it to adopt some al­most cer­tainly harm­ful ‘pos­tra­tional’ anti-epis­to­mol­ogy just be­cause of that prob­a­bil­ity. In any case, per Eth­i­cal In­junc­tions, even if my prob­a­bil­ities were oth­er­wise, it would be far more likely that I’ve made a mis­take in rea­son­ing than that adopt­ing non-ra­tio­nal be­liefs by such meth­ods would be a good idea.”

In­deed, much of the Se­quences could be seen as Eliezer con­sid­er­ing al­ter­na­tive ways of se­lect­ing be­liefs or “view­ing the world”, an­a­lyz­ing these al­ter­na­tive ways, and show­ing that they are con­trary to and in­fe­rior to ra­tio­nal­ity. Once this has been demon­strated, we call them “bi­ases”. We don’t cling to them on the ba­sis that “we can’t know the crite­rion of truth”.

Advocates of postrationality seem to be hoping that the fact that P(Occam’s razor) < 1 makes these arguments go away. It doesn’t work like that. P(Occam’s razor) = 1 − ε at most makes a fraction ε of these arguments go away. And we have a lot of evidence for Occam’s razor.

¹ As gwor­ley seems to do here and here seem­ingly ex­pect­ing me to provide a uni­ver­sally com­pel­ling ar­gu­ment in re­sponse.

• I’ll have more to say later but:

The way that I’d phrase it is that there’s a differ­ence be­tween con­sid­er­ing a claim to be true, and con­sid­er­ing its jus­tifi­ca­tion uni­ver­sally com­pel­ling.

Both of these are differ­ent from the claim ac­tu­ally be­ing true. The fact that Oc­cam’s ra­zor is true is what causes the phys­i­cal pro­cess of (oc­camian) ob­ser­va­tion and ex­per­i­ment to yield cor­rect re­sults. So you see, you’ve already man­aged to rephrase what I’ve been say­ing into some­thing differ­ent by con­flat­ing map and ter­ri­tory.

• This stuff about rain danc­ing seems like just the most ba­nal episte­molog­i­cal triv­ial­ities, which have already been dealt with thor­oughly in the Se­quences. The rea­sons why such “tests” of rain danc­ing don’t work are well known and don’t need to be re­ca­pitu­lated here.

But to do that, you need to use a meta-model. When I say that we don’t have di­rect ac­cess to the truth, this is what I mean;

This has noth­ing to do with causal path­ways, magic or oth­er­wise, di­rect or oth­er­wise. Magic would not turn a rock into a philoso­pher even if it should ex­ist.

Yes, car­ry­ing out ex­per­i­ments to de­ter­mine re­al­ity re­lies on Oc­cam’s ra­zor. It re­lies on Oc­cam’s ra­zor be­ing true. It does not in any way rely on me pos­sess­ing some mag­i­cal uni­ver­sally com­pel­ling ar­gu­ment for Oc­cam’s ra­zor. Be­cause Oc­cam’s ra­zor is in fact true in our uni­verse, ex­per­i­ment does in fact work, and thus the causal path­way for eval­u­at­ing our mod­els does in fact ex­ist: ex­per­i­ment and ob­ser­va­tion (and bayesian statis­tics).

I’m go­ing to stress this point be­cause I no­ticed oth­ers in this thread make this seem­ingly el­e­men­tary map-ter­ri­tory con­fu­sion be­fore (though I didn’t com­ment on it there). In fact it seems to me now that con­flat­ing these things is maybe ac­tu­ally the en­tire source of this de­bate: “Oc­cam’s ra­zor is true” is an en­tirely differ­ent thing from “I have ac­cess to uni­ver­sally com­pel­ling ar­gu­ments for Oc­cam’s ra­zor”, as differ­ent as a raven and the ab­stract con­cept of cor­po­rate debt. The former is true and use­ful and rele­vant to episte­mol­ogy. The lat­ter is false, im­pos­si­ble and use­less.

Because the former is true, when I say “in fact, there is a causal pathway to evaluate our models: looking at reality and doing experiments”, what I say is, in fact, true. The process in fact works. It can even be carried out by a suitably programmed robot with no awareness of what Occam’s razor or “truth” even is. No appeals or arguments about whether universally compelling arguments for Occam’s razor exist can change that fact.
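
(A toy version of that robot, entirely my own illustration: it ranks candidate models purely by comparing their predictions against observations. Nothing in it represents “truth” or Occam’s razor; it just predicts, looks, and compares.)

```python
def best_model(models, observations):
    """Return the name of the model with the lowest squared prediction error."""
    def squared_error(name):
        return sum((models[name](x) - y) ** 2 for x, y in observations)
    return min(models, key=squared_error)

# hypothetical candidate models, and observations generated by y = 2x
models = {"y = 2x": lambda x: 2 * x, "y = x + 1": lambda x: x + 1}
observations = [(x, 2 * x) for x in range(5)]
assert best_model(models, observations) == "y = 2x"
```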

(Why am I so lucky as to be a mind whose think­ing re­lies on Oc­cam’s ra­zor in a world where Oc­cam’s ra­zor is true? Well, an­i­mals evolved via nat­u­ral se­lec­tion in an Oc­camian world, and those whose minds were more fit for that world sur­vived...)

But hon­estly, I’m just re­gur­gi­tat­ing Where Re­cur­sive Jus­tifi­ca­tion Hits Bot­tom at this point.

This is a reinforcement learning system which responds to rewards: if particular thoughts or assumptions (...) have led to actions which brought the organism rewards (internally or externally generated), then those kinds of thoughts and assumptions will be reinforced.

This seems like a gross over­sim­plifi­ca­tion to me. The mind is a com­plex dy­nam­i­cal sys­tem made of lo­cally re­in­force­ment-learn­ing com­po­nents, which doesn’t do any one thing all the time.

In other words, we end up hav­ing the kinds of be­liefs that seem use­ful, as eval­u­ated by whether they suc­ceed in giv­ing us re­wards. Epistemic and in­stru­men­tal ra­tio­nal­ity were the same all along.

And this seems sim­ply wrong. You might as well say “epistemic ra­tio­nal­ity and chem­i­cal ac­tion-po­ten­tials were the same all along”. Or “jumbo jets and sheets of alu­minium were the same all along”. A jumbo jet might even be made out of sheets of alu­minium, but a ran­domly cho­sen pile of the lat­ter sure isn’t go­ing to fly.

As for your ex­am­ples, I don’t have any­thing to add to Said’s ob­ser­va­tions.

• In­deed, the sci­en­tific his­tory of how ob­ser­va­tion and ex­per­i­ment led to a cor­rect un­der­stand­ing of the phe­nomenon of rain­bows is long and fas­ci­nat­ing.

• I’m sorry, what? In this dis­cus­sion? That seems like an egre­gious con­flict of in­ter­est. You don’t get to unilat­er­ally de­cide that my com­ments are made in bad faith based on your own in­ter­pre­ta­tion of them. I saw which com­ment of mine you deleted and hon­estly I’m baf­fled by that de­ci­sion.

• If I may sum­ma­rize what I think the key dis­agree­ment is, you think we can know truth well enough to avoid the prob­lem of the crite­rion and gain noth­ing from ad­dress­ing it.

and to be pointed about it I think be­liev­ing you can iden­tify the crite­rion of truth is a “com­fort­ing” be­lief that is ei­ther con­tra­dic­tory or de­mands adopt­ing non-tran­scen­den­tal idealism

Ac­tu­ally… I was go­ing to edit my com­ment to add that I’m not sure that I would agree that I “think we can know truth well enough to avoid the prob­lem of the crite­rion” ei­ther, since your con­cep­tion of this no­tion seems to in­trin­si­cally re­quire some kind of magic, lead­ing me to be­lieve that you some­how mean some­thing differ­ent by this than I would. But I didn’t get around to it in time! No mat­ter.

• If I may sum­ma­rize what I think the key dis­agree­ment is, you think we can know truth well enough to avoid the prob­lem of the crite­rion and gain noth­ing from ad­dress­ing it.

That’s not my only dis­agree­ment. I also think that your spe­cific pro­posed solu­tion does noth­ing to “ad­dress” the prob­lem (in par­tic­u­lar be­cause it just seems like a bad idea, in gen­eral be­cause “ad­dress­ing” it to your satis­fac­tion is im­pos­si­ble), and only serves as an ex­cuse to ra­tio­nal­ize hold­ing com­fort­ing but wrong be­liefs un­der the guise of do­ing “ad­vanced philos­o­phy”. This is why the “pow­er­ful but dan­ger­ous tool” rhetoric is wrong­headed. It’s not a pow­er­ful tool. It doesn’t grant any abil­ity to step out­side your own head that you didn’t have be­fore. It’s just a trap.

• I don’t have to solve the prob­lem of in­duc­tion to look out my win­dow and see whether it is rain­ing. I don’t need 100% cer­tainty, a four-nines prob­a­bil­ity es­ti­mate is just fine for me.

Where’s the “just go to the win­dow and look” in judg­ing be­liefs ac­cord­ing to “com­pel­ling­ness-of-story”?

• Of course not, and that’s the point.

The point… is that judg­ing be­liefs ac­cord­ing to whether they achieve some goal or any­thing—is no more re­li­able than judg­ing be­liefs ac­cord­ing to whether they are true, is in no way a solu­tion to the prob­lem of in­duc­tion or even a sen­si­ble re­sponse to it, and most likely only makes your episte­mol­ogy worse?

In­deed, which is why meta­ra­tional­ity must not for­get to also in­clude all of ra­tio­nal­ity within it!

Can you ex­plain this in a way that doesn’t make it sound like an empty ap­plause light? How can I take com­pel­ling­ness-of-story into ac­count in my prob­a­bil­ity es­ti­mates with­out vi­o­lat­ing the Kol­mogorov ax­ioms?

To say a lit­tle more on dan­ger, I mean dan­ger­ous to the pur­pose of fulfilling your own de­sires.

Yes, that’s ex­actly the dan­ger.

Unlike politics, which is an object-level danger you are pointing to, postrationality is a metalevel danger, but specifically because it’s a more powerful set of tools rather than a shiny thing people like to fight over. This is like the difference between being wary of generally unsafe conditions that cannot be used and dangerous tools that are only dangerous if used by the unskilled.

Think­ing you’re skil­led enough to use some “pow­er­ful but dan­ger­ous” tool is ex­actly the prob­lem. You will never be skil­led enough to de­liber­ately adopt false be­liefs with­out suffer­ing the con­se­quences.

But surely… if one is aware of these rea­sons… then one can sim­ply redo the calcu­la­tion, tak­ing them into ac­count. So we can rob banks if it seems like the right thing to do af­ter tak­ing into ac­count the prob­lem of cor­rupted hard­ware and black swan blowups. That’s the ra­tio­nal course, right?

There’s a num­ber of replies I could give to that.

I’ll start by say­ing that this is a prime ex­am­ple of the sort of think­ing I have in mind, when I warn as­piring ra­tio­nal­ists to be­ware of clev­er­ness.

• Be­cause there’s no causal path­way through which we could di­rectly eval­u­ate whether or not our brains are ac­tu­ally track­ing re­al­ity.

I don’t know what “di­rectly” means, but there cer­tainly is a causal path­way, and we can cer­tainly eval­u­ate whether our brains are track­ing re­al­ity. Just make a pre­dic­tion, then go out­side and look with your eyes to see if it comes true.

Schizophren­ics also think that they have causal ac­cess to the truth as granted by their senses, and might main­tain that be­lief un­til their death.

So much the worse for schizophren­ics. And so?

“Well we can’t go be­low 20%, but we can in­fluence what that 20% con­sists of, so let’s swap that de­sire to be­lieve our­selves to be bet­ter than any­one else into some de­sire that makes us hap­pier and is less likely to cause need­less con­flict. Also, by learn­ing to ma­nipu­late the con­tents of that 20%, we be­come bet­ter ca­pa­ble at notic­ing when a be­lief comes from the 20% rather than the 80%, and ad­just­ing ac­cord­ingly”.

I have a hard time be­liev­ing that this sort of clever rea­son­ing will lead to any­thing other than mak­ing your be­liefs less ac­cu­rate and merely in­creas­ing the num­ber of non-truth-based be­liefs above 20%.

The only sen­si­ble re­sponse to the prob­lem of in­duc­tion is to do our best to track the truth any­way. Every­body who comes up with some clever rea­son to avoid do­ing this thinks they’ve found some mag­i­cal short­cut, some pow­er­ful yet-undis­cov­ered tool (dan­ger­ous in the wrong hands, of course, but a ra­tio­nal per­son can surely use it safely...). Then they cut them­selves on it.

• Two points:

1. Ad­vanc­ing the con­ver­sa­tion is not the only rea­son I would write such a thing, but ac­tu­ally it serves a differ­ent pur­pose: pro­tect­ing other read­ers of this site from form­ing a false be­lief that there’s some kind of con­sen­sus here that this philos­o­phy is not poi­sonous and harm­ful. Now the reader is aware that there is at least de­bate on the topic.

2. It doesn’t prove the OP’s point at all. The OP was about be­liefs (and “mak­ing sense of the world”). But I can have the be­lief “pos­tra­tional­ity is poi­sonous and harm­ful” with­out hav­ing to post a com­ment say­ing so, there­fore whether such a com­ment would ad­vance the con­ver­sa­tion need not en­ter into form­ing that be­lief, and is in fact en­tirely ir­rele­vant.

• Well, this is a long com­ment, but this seems to be the most im­por­tant bit:

The gen­eral point here is that the hu­man brain does not have magic ac­cess to the crite­ria of truth; it only has ac­cess to its own mod­els.

Why would you think “magic ac­cess” is re­quired? It seems to me the or­di­nary non-magic causal ac­cess granted by our senses works just fine.

All that you say about be­liefs of­ten be­ing crit­i­cally mis­taken due to eg. emo­tional at­tach­ment, is of course true, and that is why we must be ruth­less in re­ject­ing any rea­sons for be­liev­ing things other than truth—and if we find that a be­lief is with­out rea­sons af­ter that, we should dis­card it. The prob­lem is this seems to be ex­actly the op­po­site of what “pos­tra­tional­ity” ad­vo­cates: us­ing the lack of “magic ac­cess” to the truth as an ex­cuse to em­brace non-truth-based rea­sons for be­liev­ing things.