Why Rationality?

Previously: Responses to Tyler Cowen on Rationality

Recently: Yes, We Have Noticed the Skulls, When Rationalists Remade the World

Repeat as Necessary: Principle of Charity

A lot of people do not understand the appeal. I was surprised to find that Will Wilkinson was among them. Tyler shared a tweet-storm of his, which I will reproduce below:

1. Some thoughts on the controversy over the rationality community sparked by @tylercowen’s offhand comments to @ezraklein

2. Avoiding the most common errors in thought has surprisingly few personal benefits. Why try?

3. Vigilance against bias and fastidious Bayesianism has to appeal to something in you. But what? Why are you so worried about being wrong?

4. Some rationalists talk a lot about status signaling and tribe-based confirmation bias. What’s the theory of shutting this off?

5. Plato’s right. Will to truth is rooted in eros. Longing for recognition must be translated into a longing for a weird semantic relation.

6. Hume is right. Good things, like reason, come from co-opting and reshaping base motives. Reason’s enslaved to the impulse that drives it.

7. I see almost no interest among rationality folks in cultivating and shaping the arational motives behind rational cognition.

8. Bayes’ Law is much less important than understanding what would get somebody to ever care about applying Bayes’ Law.

9. If the reason people don’t is that it would destabilize their identity and relationships, maybe it’s not so great to be right.

10. Aristotle says a real philosopher is inhuman.

11. If the normal syndrome of epistemic irrationality is instrumentally rational, then you’ve got to choose your irrationality.

12. The inclination to choose epistemic rationality is evidence of being bad at life.

13. It’s understandable that these people should want a community in which they can feel better and maybe even superior about this.

14. We all need a home. If that desire for safety/acceptance/status produces more usefully true thoughts, that’s awesome.

(His later replies in the thread, which mostly happened after I read it the first time, are also worth looking at, and indicate the view that what has been done is good, but inadequate to the tasks at hand.)

At first this made me angry. I read this as a direct attack not only on my community, but even on the very idea of truth. How dare we pay careful attention to truth? We must have baser motives we are ignoring! We must be losers who are bad at life! We only seek truth so that we can meet up with other truth seekers and feel superior to the evil outgroup that does not sufficiently truth-seek!

Who knew the post-truth era started with Plato? I mean, I can see why one might pin the blame on him, but this is a little extreme.

Truth? You want the truth? You can’t handle the untruth! So all right, fine, I guess you can go seek truth, but it won’t help you much. Let us know if you come up with anything good.

Then I calmed down and had some time to think about it, and I realized that if one applies the aforementioned Principle of Charity, Will is asking good questions that deserve real answers. It is no more reasonable to say ‘it is all there in the sequences’ than it is fair for philosophers to answer your questions by telling you to read the following five books; it might be good advice in both cases, but it is not especially practical, and it acts to prevent dialogue.

Then Scott wrote his post and a lot of people commented, I had a day to think about things, the thread expanded, and I realized that Will, too, is offering us high praise. There is no enemy anywhere. A link today described Scott’s post as ‘a defense of the Rationalist community.’ About ten minutes after I wrote that, Noah Smith posted a link saying simply that he ‘defends rationalists.’ I suppose all that is accurate – but no one is attacking!

Once again, we are being called to a greater mission. Once again, it is an important mission, and we should choose to accept it. The twist is that, while we might not agree on some of the details, we are already on this mission, and have been for years.

So let’s take it from the top.

2. Avoiding the most common errors in thought has surprisingly few personal benefits. Why try?

In response to someone else’s rude and snappy answer, Will replied “Guess I should have done that 7th year of philosophy grad school. Sigh.” I would ask, what was Will doing in philosophy grad school in the first place? Did he not notice that going to graduate school in philosophy has surprisingly few personal benefits? Why go? I hope it was not for the generous benefit packages. I hope and believe that it was because Will wanted to seek knowledge, and use that knowledge for good (and to figure out what good is). If you want to be a top expert on human happiness as it relates to public policy, knowing how to correct for bias and figure out what is likely to be true seems rather important!

Every philosophy student I have met (sample size admittedly not that large, but at least three) gives some version of the same answer: they want to think about important things for a living. Thinking about truth, the good, ethics and other similar things seems like some combination of the most important thing they could be doing (for its own sake, for its usefulness, or both) and the most fun thing they could be doing. I agree! That sounds pretty great. I even considered going, up until the part where I realized the surprising lack of personal benefits. True story.

So here are at least some of my reasons:

I practice “vigilance against bias and fastidious Bayesianism” (hereafter I will say rationality), in part, because it is to me the most interesting thing. It is fun.

I practice Rationality, in part, because it is to me an intrinsic good. I value epistemic rationality and truth for their own sake, and I think everyone else should too. Truth seeking, in my opinion, is an important virtue. I doubt Plato or Hume would disagree.

I practice Rationality, in part, because it is to me a useful good. Instrumental rationality has helped me immensely. It has helped the world immensely. Good thinking is a necessary condition for nice things. Producing more useful true thoughts really is awesome! For certain problems, especially those related to AGI, FAI and the AI Control problem, at least the level of rigor we are currently seeking is necessary, or all is lost. Rationality is and has been very useful to me personally, and very useful to the world. I agree that in general it has surprisingly few personal benefits, but part of that is that the benefits are highly situation-dependent (in a way I will talk about more below), and part of that is that one would naively expect the benefits to be so amazing. You can get less than you would have expected, and still get far more than you paid for.

I practice Rationality, in part, because it is my community and my culture. When we talk to each other, online or in person, interesting conversation results. When we get together, we have a good time and form deep and lasting friendships. Most of my best friends, I met that way. I met my wife that way! If someone else also thinks being right is the most interesting thing, chances are very good we’ll get along. These are amazing people.

I practice Rationality, in part, because it is my comparative advantage. I am better at going down this path, relative to my ability to go down other paths, compared to others. I do not think hardcore study of Rationality is for everyone, or is for everyone who is ‘smart enough,’ or anything like that. Some of you should do one thing, and some of you should do the other. I even have thoughts on who is who.

I even occasionally practice rationality because something is wrong on the internet. I am not especially proud of that one, but there it is.

3. Vigilance against bias and fastidious Bayesianism has to appeal to something in you. But what? Why are you so worried about being wrong?

The instinctive reaction is to say “I’m not afraid of being wrong! There are plenty of other good reasons!” and there are lots of other good reasons, but I’m also rather afraid of being wrong, so among other reasons:

I am worried about making a bad trade or placing a bad bet and losing my money, and it also being my fault. I have done these things while trying to make a living, and they involved me being wrong. It sucked. Being less wrong would have helped.

I am worried about making mistakes and therefore losing when I compete in competitions, such as Magic: The Gathering tournaments. I am also afraid of this loss therefore being my fault. I have done this a lot. I prefer the alternative.

I am worried about making bad choices in life in general and therefore losing expected utility. I do this every day, so it seems like a reasonable thing to worry about. I would rather make better decisions.

I am worried about being wrong because being wrong is bad. I attach intrinsic value to being less wrong, partly because I decided a long time ago that doing so would lead to better outcomes and I am a virtue ethics kind of guy, and partly because I was brought up in a conservative Jewish tradition and I have always felt this way.

I am worried about being wrong because others are counting on me for information or advice, and I do not want to let them down, and also I want a reputation for not being wrong about stuff.

I am worried about us collectively being wrong because some time in the next century someone is likely going to build an Artificial General Intelligence. If those who do so are wrong about how this works, and try to get by with the sloppy thinking and system of kludges that is the default human way of thinking, we will fail at the control problem, and then we are all going to die and all value in the universe will be destroyed. I would really prefer to avoid that.

Note that all of these, and especially this last one, while certainly sufficient motivation for some, are in no way necessary. Any of the reasons will do fine, as will other motives, such as simply thinking that trying to be less wrong is the most interesting thing one can do with one’s day.

4. Some rationalists talk a lot about status signaling and tribe-based confirmation bias. What’s the theory of shutting this off?

These are some pretty big problems. Shutting them off entirely is not practical, at least not that I can see, but there are reasonable steps one can take to minimize the impact of these problems, and talking a lot about them is a key part of those plans.

Tribe-based confirmation bias, like all other biases, can be consciously (partly) corrected for, if one is aware of what it is and how it works. The internet is full of talk about how one should go out into the world and read the people who disagree with you politically, advice which a sadly small number of people take to heart, and which if anything should go farther than the damnable other political party. The thing is, the trick kind of works, if you sample a lot of opinions, consider other perspectives, and think about why you and your tribe believe the things you believe. It works even better when one of your tribe’s values is pointing out when this is happening. If you get into the habit of looking for status-quo bias, and thinking “status-quo bias!” whenever you see it, and others around you help point it out, you don’t kill it entirely, but you get to catch yourself more often before falling prey, and do at least partial corrections. Tribe-based confirmation bias is not any different. Awareness and constant vigilance, and good norms to promote both, are the best tools we know about – if you can do better, let us know! If anything, we as a species are not talking about it enough, because it seems to be one of the more damaging biases right now.

Status signaling is important to discuss on several levels.

As Robin Hanson never tires of pointing out, status signaling is a large part of the motivation of human activity. If you do not understand that this game exists and that everyone is playing it, or you do not understand how it works, you are really screwed.

In terms of your own actions, you will lose the game of life, and not understand how or why. Yes, most humans have an instinctive understanding of these dynamics and how to navigate them, but many of us do not have such good instincts, and even those with good instincts can improve. We can learn to see what is going on, be aware of the game, play the game and, even more importantly, play the better game of avoiding playing the game. Humans will make status moves automatically, unconsciously, all the time, by default. If you want to avoid status ruining everything, you have to think constantly about how to minimize its impact, and this pretty much has to be explicit. Signaling cannot be completely defeated by the conscious effort of those involved, but the damage can sometimes be contained.

There is also the fact that it is bad for your status to be too obviously or crassly signaling your status. Talking about status can raise the costs and lower the benefits of status signaling. In some cases, talking about signaling can make the signals you are talking about stop working, or backfire!

You can also design your systems, and the incentives they create, knowing what status signals they enable and encourage, and do your best to make those positive and productive actions, and to make the signals yield positive outcomes. This is a lot of what ‘good norms and institutions’ is actually about. Your status markers are a large part of your incentive structure. You’re stuck with status, so talk over how best to use it, and put it to work. It can even be good to explicitly say “we will award status to the people who bring snacks” if everyone is used to thinking on that level. We say “to honor America” before ballgames, so it isn’t like regular people are that different. It may sound a little crass, but it totally works.

Not thinking hard and explicitly about status signaling and related issues when considering the happiness impact of public policy seems like a huge mistake.

These two problems even tie together. A lot of your tribe-based bias is your tribe’s opinion of what is higher or lower in status, or what should be higher or lower in status, or what signals raise and lower status. Talking explicitly about this allows you to better see from the other tribe’s perspective and see whether they have a point. Treating that all as given goes really badly. So talk about all of it!

If there are ideas on how to do better than that, great, let’s talk about those too. These are hard and important problems, and I don’t see how one can hope to solve them without being willing to talk about them.

Alternatively, if the question was why shutting this off would be good, I can cite the SNAFU principle, or the lived experience that containing the status problem leads to much happier humans.

A last note is that under some circumstances, being unaware of status is a superpower. You get to do the thing that cannot be done, because you do not realize why you cannot do it. I have even taken steps to protect people from this knowledge on occasions when I felt it would do harm. Knowing and not caring at all would also work, but it is rather inhuman (see point 10!). But this is the exception, not the rule.

5. Plato’s right. Will to truth is rooted in eros. Longing for recognition must be translated into a longing for a weird semantic relation.

I disagree, unless we are taking the ‘everything is about eros’ model of the world, in which case the statement does not mean much. I never really ‘got’ Plato and ended up not reading much of him; I felt he was portraying Socrates frequently using definition-based traps on his less-than-brilliant debate partners. But I do feel like I get the Socratic method in practice, and I do not understand why eros is involved – I notice I am confused and would appreciate an explanation. Will to truth can be and often is practical; it can and should be the curiosity and playfulness of ludus, and eros is often truth’s enemy number one.

The times when my mind is focused on eros are, if anything, exactly the times it is not especially worried about truth, and most media seems to strongly back this up.

When it comes to which type of love is the love of truth, thinking of the six Greek words for love, I would put eros at least behind ludus and philia, and probably also agape. I could be convinced to put it ahead of pragma, and it presumably beats out philautia, but all six seem like they work.

The more time I spend thinking about this point, the more I notice I am confused by Plato’s perspective here, and I would love some help understanding it, as my instinctual and reflective readings of his point do not seem to make any sense.

I also don’t see this as a problem if Plato is right, since I already know about how much and how well people reason, so it wouldn’t be bad news or anything. But I am definitely confused.

6. Hume is right. Good things, like reason, come from co-opting and reshaping base motives. Reason’s enslaved to the impulse that drives it.

I agree that this is the default situation. I view it as something to overcome.

7. I see almost no interest among rationality folks in cultivating and shaping the arational motives behind rational cognition.

It is not reasonable to expect outsiders to know what insiders are up to before criticizing them, so I will simply point out, as Scott did, that this takes up a huge percentage of our cognitive efforts. Someone not noticing is a fact about their failure to notice, rather than a fact about our lack of interest or lack of action.

We are very aware that we are running on corrupted hardware. Attempting explicitly to get yourself to do what you logically conclude you should do, and often failing, helps a lot in recognizing the problem. A lot of us work hard to battle akrasia. Avoiding rationalization is an entire sequence. We study evolutionary psychology. We recognize that we are adaptation executors, so we seek to figure out and then install good habits that will help us achieve our goals, only some of which involve being rational. There is a lot of implicit recognition of virtue ethics following from consequentialism, even if 60% identified as utilitarian on the Less Wrong survey.

If anything, I see our community as paying much more attention than most others, not less, to how to shape our arational motives. That applies to sources of irrational action and cognition, so they can be overcome, and also to sources of rational cognition, so they can be encouraged, enabled and improved.

8. Bayes’ Law is much less important than understanding what would get somebody to ever care about applying Bayes’ Law.

Seriously? I personally have tried to figure out what would get people to care about Bayes’ Law, as well as trying to get people to care about Bayes’ Law (and trying to teach Bayes’ Law). We are pretty much the only game in town when it comes to trying to get people to care about Bayes’ Law. Not everyone cares simply because this is what probability and truth actually are; you need a better argument than that. We teach and preach the good word of Bayes’ Law, so to do that we try to write new and better explanations and justifications for it. We kind of do this all the time. We do it so much it sometimes gets rather obnoxious. I challenge anyone to name three people outside the Rationalist community who have put more effort into figuring out what would get somebody to ever care about applying Bayes’ Law, or more effort into getting them to actually care, than Eliezer Yudkowsky. And if you do, provide their contact information, because I want to know how it went.

If anyone has good ideas, please share them, because I agree this is a totally, really important problem!

(It also seems logically impossible for Bayes’ Law to be less important than getting someone to care about applying it, but yes, getting someone to care can be, and might well be, the harder part of capturing the benefits.)
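For what it’s worth, the mechanics of applying Bayes’ Law really are the easy part, which is exactly why caring is the bottleneck. A minimal sketch in Python, where the test sensitivity, specificity and base rate are invented purely for illustration:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' Law: P(H|E) = P(E|H) * P(H) / P(E),
    with P(E) expanded via the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: a condition with a 1% base rate, and a test
# that is 90% sensitive (P(positive | sick) = 0.9) and 90% specific
# (P(positive | healthy) = 0.1). A positive result moves you from a
# 1% to only about an 8% chance of being sick.
p = posterior(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.1)
print(round(p, 3))  # 0.083
```

The update itself is three lines of arithmetic; noticing that a situation calls for it, and wanting to run it, is the part no formula supplies.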

10. Aristotle says a real philosopher is inhuman.

Funny you should mention that. There is a good chance that soon we will have Artificial General Intelligence, and if we do, it had better be a real and damn good philosopher. The alternative, if we fail to achieve this, is likely to be a universe with zero value. This is why we sometimes say that philosophy has a deadline. Alternatively, we are busily creating automated systems now that, while only narrow AIs, are optimizing our world for things we may not want, including in areas like public policy. If our real philosophers, our most important philosophers, are not human, it seems important that they get their philosophy right, and that means we have to do this work now. Thinking about these problems very carefully is really important.

I would also say that if you are a human doing philosophy, and it results in you becoming inhuman, you probably need to do better philosophy!

9. If the reason people don’t is that it would destabilize their identity and relationships, maybe it’s not so great to be right.

11. If the normal syndrome of epistemic irrationality is instrumentally rational, then you’ve got to choose your irrationality.

12. The inclination to choose epistemic rationality is evidence of being bad at life.

YES! AGREED! The Way is not for everyone. It is for those who wish to seek it, and for those who wish it. It is not a universal path that everyone must adopt, for it is a hard path.

It is the writer who tells the class that if you can imagine yourself doing anything other than writing, then you should write, but you should not be a writer.

It is the master who looks at the student and tells her to go home, she is not welcome here, until she persists and convinces the master otherwise.

Those of us who are here, and have come far enough along the path, are not a random sample. We were warned, or should have been, and we persisted.

Rationality is a distinct path for figuring out how the world works. Without it, one’s brain is a system of kludges that mostly gets reasonable answers. To improve that on the margin, you add another kludge. Some of those even involve learning Bayes’ Rule, or learning about some of the biases, in ways we have found to work on their own, complete with tricks to get better outcomes.

To make ourselves all that we can be, we need to dismantle and throw out large parts of the code base and start again. That is not a fast solution. That is what rationality does.

It says, we are going to figure things out from first principles. We are going to logically deduce what is going on. We will say explicitly what others only say implicitly, so we must have the freedom and a space to do so. Question everything. Your punches might still bounce off everything, but the important thing is to work on your form until your hand bleeds. Rather than instinctively playing status games, or aiming the bat vaguely at the ball, think about the underlying mechanisms. Learn the rules, learn the rules for learning the rules, and, if you need to, number them. When you get it wrong, figure out why. When you get it right, figure that out too. Make becoming stronger your highest virtue. Go meta all the time, perhaps too much (better too much than not enough, but if you disagree, let’s talk about that…).

Eventually, you come out stronger, faster, better than the ones who never rebuilt their systems. But the process of upgrading is not fast. For many it kind of sucks, although those for whom figuring out how things work is the interesting thing get to enjoy the process.

Think of this as a version of the resource curse. You go through that process because the old model wouldn’t cut it. If you can’t chop wood and carry water, perhaps it is time to seek enlightenment.

If your life is already going pretty much OK, having an illogical thinking process does not especially bother you, and you do not have something to protect, why would you go through all that?

You wouldn’t. And you shouldn’t! The normal syndrome of epistemic irrationality is a good enough approximation for you. Pick up a few tricks, some defenses against those who would exploit you, a link to how to buy low-cost ETFs or mutual funds, and get on with your life! You’ve got things to do and slack in the system.

If taking the already long and hard path would also destroy your relationships or threaten your identity, that’s also a damn good reason to turn aside. Maybe it’s still worth it. Maybe you think truth is more important. Maybe you have something to protect. Probably not, though. And that is fine!

13. It’s understandable that these people should want a community in which they can feel better and maybe even superior about this.

14. We all need a home. If that desire for safety/acceptance/status produces more usefully true thoughts, that’s awesome.

Amen, brother. Truth requires good institutions and norms supporting it. If every time you chose the truth over the convenient, the weird over the normal, you took a social hit, your good habits would be in a lot of trouble. Is it right for us to feel superior?

Well, yeah. It is. That is how you build norms and institutions that reinforce the things you want. You value them, and the people who have them get to feel superior, thus encouraging those things. And presumably you think more of those things is superior, so feeling that way seems pretty reasonable.

Then everyone else gets all the positive externalities that come when people work hard to figure things out, and then work hard to tell everyone all their findings.

Others, of course, have said a mix of things, including many that are… less friendly. Of course some people pull out Spock, or point to some arrogant thing Eliezer Yudkowsky said once (but, to be fair, probably actually believes), or cite an example of utilitarian math giving the ‘wrong answer’, or any number of other things. A lot of it isn’t constructive or fair. So that means we can take our place among groups like men, women, gender non-binary people, gays, straight people, black people, white people, Hispanics, Asians, Jews, Muslims, Christians, Atheists, Buddhists, Democrats, Republicans, Libertarians, Socialists, Communists, Fascists, Astronomers, Astrologers, Architects, Teachers, Cops, Economists, and fans of various romantic pairings on many different television series, among many many others, in the category of groups that the internet said a ton of completely ignorant, false, mean and unhelpful things about today.

I am fine with that. Bring it on. I have the feeling we can handle it.

