Epistemic status: Hortative. I’m trying to argue for carving reality at a new joint.

I think it’s lovely and useful that we have labels, not just for rationalist, but for rationalist-adjacent and for post-rationalist. But these labels are generally made extensionally, by pointing at people who claim those labels, rather than intensionally, by trying to distill what distinguishes those clusters.

I have some intensional definitions that I’ve been honing for a long time. Here’s the biggest one.

# A rationalist, in the sense of this particular community, is someone who is trying to build and update a unified probabilistic model of how the entire world works, and trying to use that model to make predictions and decisions.

By “unified” I mean decompartmentalized: if there’s a domain where the model gives two incompatible predictions, then as soon as that’s noticed it has to be rectified in some way.

And it’s important that it be probabilistic: it’s perfectly consistent to resolve a conflict between predictions by saying “I currently think the answer is X with about 60% probability, and Y with about 25% probability, and with about 15% probability I’m missing the correct option or confused about the nature of the question entirely”.

The Sequences are aimed at people trying to do exactly this thing, and Eliezer focuses on how to not go horribly wrong in the process (with a special focus on not trusting one’s own sense of obviousness).

Being a rationalist isn’t about any specific set of conclusions: it’s not about being an effective altruist, or a utilitarian, or even an atheist. It’s about whether one is trying to do that thing or not. Even if one is doing a terrible job of it!

Truth-seeking is a prerequisite, but it’s not enough. It’s possible to be very disciplined about finding and assembling true facts, without thereby changing the way one thinks about the world. As a contrast, here’s how the New York Times, whose fact-checking quality is not in dispute, decides what to report:

> By and large, talented reporters scrambled to match stories with what internally was often called “the narrative.” We were occasionally asked to map a narrative for our various beats a year in advance, square the plan with editors, then generate stories that fit the pre-designated line.

The difference between wielding a narrative and fitting new facts into it, and learning a model from new facts, is the difference between rationalization and rationality.

“Taking weird ideas seriously” is also a prerequisite (because some weird ideas are true, and if you bounce off of them you won’t get far), but again it’s not enough. I shouldn’t really need to convince you of that one.

Okay, then, so what’s a post-rationalist?

The people who identify as such generally don’t want to pin it down, but here’s my attempt at categorizing at least the ones who make sense to me:

# A post-rationalist is someone who believes the rationalist project is misguided or impossible, but who likes to use some of the tools and concepts developed by the rationalists.

Of course I’m less confident that this properly defines the cluster, outside of groups like Ribbonfarm where it seems to fit quite well. There are people who view the Sequences (or whatever parts have diffused to them) the way they view Derrida: as one more tool to try on an interesting conundrum, see if it works there, but not really treat it as applicable across the board.

And there are those who talk about being a fox rather than a hedgehog (and therefore see trying to reconcile one’s models across domains as being harmful), and those who talk about how the very attempt is a matter of hubris: that not only can we not know the universe, we cannot even realistically aspire to decent calibration.

And then, of course:

# A rationalist-adjacent is someone who enjoys spending time with some clusters of rationalists (and/or enjoys discussing some topics with rationalists), but who is not interested in doing the whole rationalist thing themself.

Which is not a bad thing at all! It’s honestly a good sign of a healthy community that it appeals even to people to whom the project doesn’t appeal, and the rationalist-adjacents may be more psychologically healthy than the rationalists.

The real issue of contention, as far as I’m concerned, is something I’ve saved for the end: that not everyone who self-identifies as a rationalist fits the first definition very well, and that the first definition is in fact a more compact cluster than self-identification.

And that makes this community, and this site, a bit tricky to navigate. There are rationalist-adjacents for whom a double-crux on many topics would fail because they’re not interested in zooming in so close on a belief. There are post-rationalists for whom a double-crux would fail because they can just switch frames on the conversation any time they’re feeling stuck. And to try to double-crux with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.

I don’t yet know of an intervention for signaling that a conversation is happening on explicitly rationalist norms: it’s hard to do that in a way that others won’t feel pressured to insist they’d follow. But I wish there were one.

• I want to specifically object to the last part of the post (the rest of it is fine, and I agree almost completely with both the explicit positive claims and the implied normative ones).

But at the end, you talk about double-crux, and say:

> And to try to double-crux with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.

Well, and why did you think you could take it for granted in the community? I don’t think that’s justified at all, post-rationalists and rationalist-adjacents aside!

For instance, while I don’t like to label myself as any kind of ‘-ist’, even a ‘rationalist’, the term applies to me, I think, better than it does to most people. (This is by no means a claim of any extraordinary rationalist accomplishments, please note; in fact, if pressed for a label, I’d have to say that I prefer the old one, ‘aspiring rationalist’… but then, your given definition, and I agree with it, requires no particular accomplishment, only a perspective and an attempt to progress toward a certain goal. These things, I think, I can honestly claim.) Certainly you’ll find me to be among the first to argue for the philosophy laid out in the Sequences, and against any ‘post-rationalism’ or what have you.

But I have deep reservations about this whole ‘double-crux’ business, to say the least; and I have commented on this point, here on Less Wrong, and have not seen it established to my satisfaction that the technique is all that useful or interesting, and most assuredly have not seen any evidence that it ought to be taken as part of some “rationalist canon”, which you may reasonably expect any other ‘rationalist’ to endorse.

Now, you did say that you’d feel infuriated by having double-crux fail in either of those specific ways, so perhaps you would be okay with double-crux failing in any other way at all? But this does not seem likely to me; and, in any case, my own objection to the technique is similar to what you describe as the ‘rationalist-adjacent’ response (but different, of course, in that my objection is a principled one, rather than a mere unreflective lack of interest in examining beliefs too closely).

Lest you take this comment to be merely a stream of grumbling to no purpose, let me ask you this: is the bit about double-crux meant to be merely an example of a general tendency (of which many other examples may be found) for Less Wrong site/community members to fail to endorse the various foundational concepts and techniques of ‘LW-style’ rationality? Or is the failure of double-crux indeed a central concern of yours in writing this post? How important is that part of the post, in other words? Is the rest written in the service of that complaint specifically? Or is it separable?

• There’s a big difference between a person who says double-cruxing is a bad tool and they don’t want to use it, and someone who agrees to it but then turns out not to actually be Doing The Thing.

And it’s not that ability-to-double-crux is synonymous with rationality, just that it’s the best proxy I could think of for what a typical frustrating interaction on this site is missing. Maybe I should specify that.

• I would hazard a guess that you might have written the same comment with “debate” or “discussion” instead of double-crux, if double-crux hadn’t been invented. Double-crux is one particular way to resolve a disagreement, but I think the issues of “not being willing to zoom in on beliefs” and “switching frames mid-conversation” come up in other conversational paradigms.

(I’m not sure whether Said would have related objections to “zooming in on beliefs” or “switching frames” being Things Worth Doing, but it seemed worth examining the distinction.)

• I think it would be very connotatively wrong to use those. I really need to say “the kind of conversation where you can examine claims together, and both parties are playing fair and trying to raise their true objections and not moving the goalposts”, and “double-crux” points at a subset of that. It doesn’t literally have to be double-crux, but it would take a new definition in order to have a handle for that, and three definitions in one post is already kind of pushing it.

Any better ideas?

• “Collaborative truth-seeking”?

> There are rationalist-adjacents for whom collaborative truth-seeking on many topics would fail because they’re not interested in zooming in so close on a belief. There are post-rationalists for whom collaborative truth-seeking would fail because they can just switch frames on the conversation any time they’re feeling stuck. And to try to collaborate on truth-seeking with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.
• Gotcha. I don’t know of a good word for the superset that includes double-crux, but I see what you’re pointing at.

• Unless I am misunderstanding, wouldn’t orthonormal say that “switching frames” is actually a thing not to do (and that it’s something post-rationalists do, which is in conflict with rationalist approaches)?

• I believe the claim he was making (which I was endorsing) was to not switch frames in the middle of a conversation in a sort of slippery, goalpost-moving way (especially repeatedly, without stopping to clarify that you’re doing that). That can result in poor communication.

I’ve previously talked a lot about noticing frame differences, which includes noticing when it’s time to switch frames. But within the rationalist paradigm, I’d argue this is a thing you should do intentionally when it’s appropriate for the situation, and flag when you’re doing it, and make sure that your interlocutor understands the new frame.

• I agree with this comment.

The rationalist way to handle multiple frames is to treat them either as different useful heuristics which can outperform naively optimizing from your known map, or as different hypotheses for the correct general frame, rather than as tactical gambits in a disagreement.

• There’s a set of post-rationalist norms where switching frames isn’t a conversational gambit; it’s expected, and part of a generative process for solving problems and creating closeness. I would love to see people be able to switch between these different types of norms, as it can be equally frustrating when you’re trying to vibe with people who can only operate through rationalist frames.

• (Just wanted to say I appreciated the way you put forth this comment – specifically flagging a disagreement while checking in on how central it was.)

• In terms of conversation style, I’d define a “rationalist” as someone who’s against non-factual objections to factual claims: “you’re not an expert”, “you’re motivated to say this”, “you’re friends with the wrong people”, “your claim has bad consequences”, and so on. An intermediate stage would be the “grudging rationalist”: someone who can refrain from using such objections if asked, but still listens to them, and relapses into using them when among non-rationalists.

• “You’re not an expert” is a valid Bayesian objection, so I’d accept it. But it can be screened off with more direct evidence. I would be against it if someone persists in an objection that is no longer valid.
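
The screening-off point can be made concrete with a toy Bayes calculation. This is only an illustrative sketch (the priors and the likelihood ratio below are made-up numbers, not anything from the thread): expertise sets the prior that a claim is correct, but enough independent direct evidence makes the expert and non-expert posteriors converge, at which point “you’re not an expert” carries almost no remaining weight.

```python
# Toy sketch of "screening off" expertise with direct evidence.
# All numbers are illustrative assumptions, not claims about real data.

def posterior(prior, n_confirmations, likelihood_ratio=4.0):
    """Posterior that a claim is correct after n independent pieces of
    direct evidence, each favoring the claim by the given likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio ** n_confirmations
    return odds / (1 + odds)

prior_expert = 0.8      # assumed prior when an expert makes the claim
prior_non_expert = 0.3  # assumed prior when a non-expert makes the claim

for n in (0, 1, 5):
    print(f"{n} confirmations: "
          f"expert {posterior(prior_expert, n):.3f}, "
          f"non-expert {posterior(prior_non_expert, n):.3f}")
```

With zero confirmations the expertise prior dominates; after five, both posteriors exceed 0.99 and the gap between them is under a percentage point, which is the sense in which direct evidence screens off the expertise objection.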

• I’m not sure our definitions are the same, but they’re very highly correlated in my experience.

> A rationalist, in the sense of this particular community, is someone who is trying to build and update a unified probabilistic model of how the entire world works, and trying to use that model to make predictions and decisions.

… the entire world? As far as I can tell, the vast majority of rationalists would like to have an accurate probabilistic model of the entire world, but are only trying to maintain and update small-ish relevant parts of it. For example, I have (as far as I can recall) never tried to build any model of Congolese politics (until writing this sentence, actually, when I stopped to consider what I do believe about it), nor tried to propagate what I know that relates to Congolese politics (which is non-zero) to other topics.

• You have a prior on Congolese politics, which draws from causal nodes like “central Africa”, “post-colonialism”, and the like; the fact that your model is uncertain about it (until you look anything up or even try to recall relevant details) doesn’t mean your model is mute about it. It’s there even before you look at it, and there’s been no need to put special effort into it before it was relevant to a question or decision that mattered to you.

I’m just saying that rationalists are trying to make one big map, with regions filled in at different rates (and we won’t get around to everything), rather than trying to make separate map-isteria.

• I agree that my global map contains a region for Congolese politics. What I’m saying is that I’m not trying to maintain that bit of the map, or update it based on new info. But I guess as long as the whole map is global and I’m trying to update the global map, that suffices for your definition?

• It does.

• I wish I’d remembered to include this in the original post (and it feels wrong to slip it in now), but Scott Aaronson neatly paralleled my distinction between rationalists and post-rationalists when discussing interpretations of quantum mechanics:

> But the basic split between Many-Worlds and Copenhagen (or better: between Many-Worlds and “shut-up-and-calculate” / “QM needs no interpretation” / etc.), I regard as coming from two fundamentally different conceptions of what a scientific theory is supposed to do for you. Is it supposed to posit an objective state for the universe, or be only a tool that you use to organize your experiences?

Scott tries his best to give a not-answer and be done with it, which is in keeping with my categorization of him as a prominent rationalist-adjacent.

• Wait. Does it mean that, given that I prefer instrumentalism over realism in metaphysics and Copenhagen over MWI in QM (up to some nuances), I am a post-rationalist now? That doesn’t feel right. I don’t believe that the rationalist project is “misguided or impossible”, unless you use a very narrow definition of the “rationalist project”. Here and here I defended what is arguably the core of the rationalist project.

• It’s not the same distinction, but I expect it’s correlated. The correlation isn’t strong enough to prevent people from being in the other quadrants, of course. :-)

> if there’s a domain where the model gives two incompatible predictions, then as soon as that’s noticed it has to be rectified in some way.

What do you mean by “rectified”, and are you sure you mean “rectified” rather than, say, “flagged for attention”? (A bounded approximate Bayesian approaches consistency by trying to be accurate, but doesn’t try to be consistent. I believe ‘immediately update your model somehow when you notice an inconsistency’ is a bad policy for a human [and part of a weak-man version of rationalism that harms people who try to follow it], and I don’t think this belief is opposed to “rationalism”, which should only require not indefinitely tolerating inconsistency.)

• The next paragraph applies there: you can rectify it by saying it’s a conflict between hypotheses / heuristics, even if you can’t get solid evidence on which is more likely to be correct.

Cases where you notice an inconsistency are often juicy opportunities to become more accurate.

• I feel like this fails to capture some important features of each of these categories as they exist in my mind.

• The rationalist category can reasonably include people who are not building unified probabilistic models, even if LW-style rationalists are Bayesians, because they apply similarly structured epistemological methods even if their specific methods are different.

• The post-rationalist category can be talked about constructively, although it’s a bit hard to do this in a way that satisfies everyone, especially rationalists, because it requires giving up commitment to a single ontology as the only right ontology.

> The post-rationalist category can be talked about constructively, although it’s a bit hard to do this in a way that satisfies everyone, especially rationalists, because it requires giving up commitment to a single ontology as the only right ontology.

What’s the distinction between this description and the way orthonormal described post-rationalists?

• I see their description as set up against the definition of rationalist: an eliminative description that says more about what it is not than what it is.

• Like “non-Evangelical Protestant”, a label can be useful even if it’s defined as “member of this big cluster but not a member of this or that major subcluster”. It can even have more unity on many features than the big cluster does.

> The post-rationalist category can be talked about constructively, although it’s a bit hard to do this in a way that satisfies everyone, especially rationalists, because it requires giving up commitment to a single ontology as the only right ontology.

To clarify the syntax here, and then to ask the appropriate follow-up question… do you mean more like

• In order to give a satisfying constructive definition of post-rationalists, one must give up commitment to a single ontology

(Which would be surprising to me; could you elaborate?)

or more like

• The constructive definition of post-rationalist would include something like “a person who has given up commitment to a single ontology”

(In which case I don’t see why it would be hard to give that definition in a way that satisfies rationalists?)

• To some extent I mean both things, though more the former than the latter.

I’ll give a direct answer, but first consider this imperfect comparison. I think it gives some flavor of how the OP is approaching the post-rationalist category: it might evoke in a self-identified rationalist the sort of feeling a post-rationalist would have seeing themselves explained the way they are here.

Let’s give a definition of a pre-rationalist that someone who was a pre-rationalist would endorse. They wouldn’t call themselves a pre-rationalist, of course; more likely they’d call themselves something like a normal, functioning adult. They might describe themselves like this, in relation to epistemology:

They then might describe a rationalist like this:

> A rationalist is someone who believes certain kinds or ways of knowing truth are invalid, that only special methods can be used to find truth, and that other kinds of truths are not real.

There’s a lot going on here. The pre-rationalist is framing things in ways that make sense to them, which is fair, but it also means they are somewhat unfair to the rationalist, because in their heart what they see is some annoying person who rejects things that they know to be true because it doesn’t fit within some system that the rationalist, from the pre-rationalist’s point of view, made up. They see the rationalist as a person disconnected from reality and tied up in their special notion of truth. Compare the way that, to a non-rationalist outsider, rationalists can appear arrogant, idealistic, foolish, unemotional, etc.

I ultimately think something similar is going on here. I don’t think this is malicious, only that orthonormal doesn’t have an inside view of what it would mean to be a post-rationalist, and so offers a definition that is defined in relation to being a rationalist, just as a pre-rationalist would offer a definition of a rationalist set up in contrast to their notion of what it is to be “normal”.

So yes, I do mean that “in order to give a satisfying constructive definition of post-rationalists, one must give up commitment to a single ontology”, because this is the only way to give such a definition from the inside and have it make sense.

I think the problem is actually worse than this, which is why I haven’t proffered my own definition here. I don’t think there’s a clean way to draw lines around the post-rationalist category and have it capture all of what a post-rationalist would consider important, because it would require making distinctions that are in a certain sense not real, but in a certain sense are. You might say that the post-rationalist position is ultimately a non-dual one, as a way of pointing vaguely in the direction of what I mean, but it’s not that helpful a pointer, because it’s only useful if you have some experience to ground what that means.

So if I really had to try to offer a constructive definition, it would look something like a pointer to what it is like to think in this way, so that you could see it for yourself. But you’d have to do that seeing all on your own, not through my words; it would be highly contextualized to fit the person I was offering the definition to; and in the end it would effectively have to make you, at least for a moment, into a post-rationalist, even if beyond that moment you didn’t consider yourself one.

Now that I’ve written all this, I realize this post in itself might serve as such a pointer to someone, though not necessarily you, philh.

• Also:

> The rationalist category can reasonably include people who are not building unified probabilistic models, even if LW-style rationalists are Bayesians, because they apply similarly structured epistemological methods even if their specific methods are different.

I think this is the part of the post where orthonormal is explicitly drawing a boundary that isn’t yet consensus. (So, yes, there’s probably a disagreement here.)

I think there is a meaningful category of “people who use similarly structured epistemological methods, without necessarily having a unified probability model.” There’s a separate, smaller category of “people doing the unified probabilistic model thing.”

One could argue that either of those makes sense to call a rationalist, but you at least need to reference those different categories sometimes.

• I found the framework in the book “Action Inquiry” by Bill Torbert very helpful in this context. In Torbert’s framework, a typical young rationalist would be in “expert” mode. There are many good things to be had in later levels.

The post-rationalists, in this view, do not think that rationalism is wrong, but that it has limitations. For example, a more relaxed attitude toward the truth of one’s theories can leave you more open to new information and to other ways of seeing things. A recent conversation I had with a young rationalist illustrates this. He criticised me for denying the ‘science’ showing, in his view, that statins are highly beneficial medications, and felt I was succumbing to woo-woo in being sceptical. I tried to argue that it is not a simple matter of science versus woo-woo. The scientific process is influenced by financial incentives, career incentives, egos, ideologies, and the sometimes excessive influence of high-status figures, especially in medicine; the ideal of open and complete publication of data, methods, and results is by no means met. At the same time, one should not assume that with 15 minutes + Google you can do better than a highly trained specialist.

• Re: your anecdote, I interpret that conversation as one between a person with a more naive view of how the world works and one with a more sophisticated understanding. Both people in such a conversation, or neither of them, could be rationalists under this framework.

> if there’s a domain where the model gives two incompatible predictions, then as soon as that’s noticed it has to be rectified in some way.

This feels excessive to me, but maybe you didn’t intend it as strongly as I interpret it.

I do think it’s the case that if you have incompatible predictions, something is wrong. But I think often the best you can do to correct it is to say something like...

“Okay, this part of my model would predict this thing, and that part would predict that thing, and I don’t really know how to reconcile that. I don’t know which, if either, is correct, and until I understand this better I’m going to proceed with caution in this area, and not trust either of those parts of my model too much.”

Does that seem like it would satisfy the intent of what you wrote?

• Yes, it does. The probabilistic part applies to different parts of my model as well as to outputs of a single model part.

• This matches my interpretation of the “community”. Personally, I am more of a post-rationalist type, with an instrumentalist/anti-realist bent philosophically, and I think that the concept of “the truth” is the most harmful part of rationality teachings. Replacing “true” with “high predictive accuracy” everywhere in the Sequences would be a worthwhile exercise.

• I like this: it’s simple, it resolved conceptual tensions I had, and I will start using it. (Obvs I should check in in a few months to see if this worked out.)

• Thanks, I thought this was useful, especially dividing it into three categories instead of two.

• A related thing I was thinking about for some time: it seems to me that the line between “building on X” and “disagreeing with X” is sometimes unclear, and the final choice is often made for social reasons rather than because of the natural structure of the idea-space. (In other words, the ideology is not the community; therefore the relations between two ideologies often do not determine the relations between the respective communities.)

Imagine that there was a guy X who said some wise things: A, B, and C. Later, there was another guy Y who said: A, B, C, and D. Now, depending on how Y feels about X, he could describe his own wisdom as either “standing on the shoulders of giants, such as X” or “debunking the teachings of X, who was foolishly ignorant about D”. (Sometimes it’s not really Y alone, but rather the followers of Y, who make the choice.) Two descriptions of the same situation; very different connotations.

To give a specific example, is Scott Alexander a post-rationalist? (I am not sure whether he ever wrote anything on this topic, but even if he did, let’s ignore it completely now, because… well, he could be mistaken about where he really belongs.) Let’s try to find the answer based on his online behavior.

There are some similarities: he writes a blog outside of LW. He goes against some norms of LW (e.g. he debates politics). He is admired by many people on LW, because he writes things they find insightful. At the same time, a large part of his audience disagrees with some core LW teachings (e.g. all religious SSC readers presumably disagree with LW taking atheism as the obviously rational conclusion).

So it seems like he is in a perfect position to brand himself as something that means “kinda like the rationalists, only better”. Why didn’t this happen? First, because Scott is not interested in doing this. Second, because Scott writes about the rationalist community in a way that doesn’t even allow his fans (e.g. the large part that disagrees with LW) to do this for him. Scott is loyal to the rationalist project and community.

If we agree that this is what makes Scott a non-post-rationalist, despite all the similarities with them, then it provides some information about what being a post-rationalist means. (Essentially, what you wrote in the article.)

• Scott could do all those things and be a rationalist-adjacent. He’s a rationalist under my typology because he shares the sincere yearning and striving for understanding all of the things in one modality, even if he is okay with the utility of sometimes spending time in other modalities. (Which he doesn’t seem to, much, but he respects people who do; he just wants to understand what’s happening with them.)

• I’m always a bit suspicious of identity or membership labels for humans. There are always overlap, change-over-time, and boundary cases that tend to make me throw up my hands and say “okay, be whatever you want!”

In this post, I’m confused by the phrase “in the sense of this particular community” for a description that does not mention community. The definition seems to be closer to “rationality-seeker” as a personal belief and behavior description than to “rationalist” as a member or group actor.

> I’m confused by the phrase “in the sense of this particular community” for a description that does not mention community.

I’m distinguishing this sense of rationalist from the philosophical school that long predated this community and has many significant differences from it. Can you suggest a better way to phrase my definition?

• Is there a working definition for anti-rationalist?