The Craft and the Community

This sequence ran from March to April of 2009 and dealt with the topic of building rationalist communities that could systematically improve on the art, craft, and science of human rationality. This is a highly forward-looking sequence—not so much an immediately complete recipe as a list of action items and warnings for anyone setting out in the future to build a craft and a community.

  • Raising the Sanity Waterline: Behind every particular failure of social rationality is a larger and more general failure of social rationality; even if all religious content were deleted tomorrow from all human minds, the larger failures that permit religion would still be present. Religion may serve the function of an asphyxiated canary in a coal mine—getting rid of the canary doesn’t get rid of the gas. Even a complete social victory for atheism would only be the beginning of the real work of rationalists. What could you teach people without ever explicitly mentioning religion, that would raise their general epistemic waterline to the point that religion went underwater?

  • A Sense That More Is Possible: The art of human rationality may not have been much developed because its practitioners lack a sense that vastly more is possible. The level of expertise that most rationalists strive to develop is not on a par with the skills of a professional mathematician—more like that of a strong casual amateur. Self-proclaimed “rationalists” don’t seem to get huge amounts of personal mileage out of their craft, and no one sees a problem with this. Yet rationalists get less systematic training in a less systematic context than a first-dan black belt gets in hitting people.

  • Epistemic Viciousness: An essay by Gillian Russell on “Epistemic Viciousness in the Martial Arts” generalizes amazingly to possible and actual problems with building a community around rationality. Most notably, the extreme dangers associated with “data poverty”—the difficulty of testing the skills in the real world. But also such factors as the sacredness of the dojo, the investment in teachings long-practiced, the difficulty of book learning that leads into the need to trust a teacher, deference to historical masters, and above all, living in data poverty while continuing to act as if the luxury of trust is possible.

  • Schools Proliferating Without Evidence: The branching schools of “psychotherapy”, another domain in which experimental verification was weak (nonexistent, actually), show that an aspiring craft lives or dies by the degree to which it can be tested in the real world. In the absence of that testing, one becomes prestigious by inventing yet another school and having students, rather than excelling at any visible performance criterion. The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness—that there was a family of measures that by golly did validate well against each other. The act of creating a new measurement creates new science; if it’s a good measurement, you get good science.

  • 3 Levels of Rationality Verification: How far the craft of rationality can be taken depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational. A “reputational” test is some real-world problem that tests the ability of a teacher or a school (like running a hedge fund, say)—“keeping it real”, but without being able to break down exactly what was responsible for success. An “experimental” test is one that can be run on each of a hundred students (such as a well-validated survey). An “organizational” test is one that can be used to preserve the integrity of organizations by validating individuals or small groups, even in the face of strong incentives to game the test. The strength of solution invented at each level will determine how far the craft of rationality can go in the real world.

  • Why Our Kind Can’t Cooperate: The atheist/libertarian/technophile/sf-fan/early-adopter/programmer/etc. crowd, aka “the nonconformist cluster”, seems to be stunningly bad at coordinating group projects. There are a number of reasons for this, but one of them is that people are as reluctant to speak agreement out loud as they are eager to voice disagreements—the exact opposite of the situation that obtains in more cohesive and powerful communities. This is not rational either! It is dangerous to be half a rationalist (in general), and this also applies to teaching only disagreement but not agreement, or only lonely defiance but not coordination. The pseudo-rationalist taboo against expressing strong feelings probably doesn’t help either.

  • Tolerate Tolerance: One of the likely characteristics of someone who sets out to be a “rationalist” is a lower-than-usual tolerance for flawed thinking. This makes it very important to tolerate other people’s tolerance—to avoid rejecting them because they tolerate people you wouldn’t—since otherwise we must all have exactly the same standards of tolerance in order to work together, which is unlikely. Even if someone has a nice word to say about complete lunatics and crackpots—so long as they don’t literally believe the same ideas themselves—try to be nice to them? Intolerance of tolerance corresponds to punishment of non-punishers, a very dangerous game-theoretic idiom that can lock completely arbitrary systems in place even when they benefit no one at all.

  • You’re Calling Who A Cult Leader?: Paul Graham gets exactly the same accusations about “cults” and “echo chambers” and “coteries” that I do, in exactly the same tone—e.g. comparing the long hours worked by Y Combinator startup founders to the sleep-deprivation tactic used in cults, or claiming that founders were asked to move to the Bay Area startup hub as a cult tactic of separation from friends and family. This is bizarre, considering our relative surface risk factors. It just seems to be a failure mode of the nonconformist community in general. By far the most cultish-looking behavior on Hacker News is people trying to show off how willing they are to disagree with Paul Graham, which, I can personally testify, feels really bizarre when you’re the target. Admiring someone shouldn’t be so scary—I don’t hold back so much when praising e.g. Douglas Hofstadter; in this world there are people who have pulled off awesome feats and it is okay to admire them highly.

  • On Things That Are Awesome: Seven followup thoughts: I can list more than one thing that is awesome; when I think of “Douglas Hofstadter” I am really thinking of his all-time greatest work; the greatest work is not the person; when we imagine other people we are imagining their output, so the real Douglas Hofstadter is the source of “Douglas Hofstadter”; I most strongly get the sensation of awesomeness when I see someone outdoing me overwhelmingly, at some task I’ve actually tried; we tend to admire unique detailed awesome things and overlook common nondetailed awesome things; religion and its bastard child “spirituality” tend to make us overlook human awesomeness.

  • Your Price For Joining: The game-theoretical puzzle of the Ultimatum game has its reflection in a real-world dilemma: How much do you demand that an existing group adjust toward you, before you will adjust toward it? Our hunter-gatherer instincts will be tuned to groups of 40 with very minimal administrative demands and equal participation, meaning that we underestimate the inertia of larger and more specialized groups and demand too much before joining them. In other groups this resistance can be overcome by affective death spirals and conformity, but rationalists think themselves too good for this—with the result that people in the nonconformist cluster often set their joining prices way way way too high, like a 50-way split with each player demanding 20% of the money (see the sketch below). Nonconformists need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness. If an issue isn’t worth personally fixing by however much effort it takes, it’s not worth a refusal to contribute.
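
To see why those joining prices can never all be met, here is a minimal Python sketch (my own toy model, not from the original essay): a split succeeds only if everyone’s demanded share fits within a single pot.

```python
# Toy model of the N-way "joining price" problem: a group project can only
# pay out if everyone's demanded share fits within one pot of money.

def split_succeeds(demands):
    """Return True if all demanded shares can be paid from a single pot (1.0)."""
    return sum(demands) <= 1.0

# Two players splitting the pot evenly is feasible.
print(split_succeeds([0.5, 0.5]))   # True

# Fifty players each demanding 20% are collectively asking for ten pots'
# worth of money, so the deal always falls through.
print(split_succeeds([0.20] * 50))  # False
```

Any set of demands summing past 100% guarantees rejection, no matter how reasonable each share looks in isolation.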

  • Can Humanism Match Religion’s Output?: Anyone with a simple and obvious charitable project—responding with food and shelter to a tidal wave in Thailand, say—would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists. For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits. Can no rationalist match the motivation that comes from the irrational fear of Hell? Or does the real story have more to do with the motivating power of physically meeting others who share your cause, and group norms of participating?

  • Church vs. Taskforce: Churches serve a role of providing community—but they aren’t explicitly optimized for this, because their nominal role is different. If we desire community without church, can we go one better in the course of deleting religion? There’s a great deal of work to be done in the world; rationalist communities might potentially organize themselves around good causes, while explicitly optimizing for community.

  • Rationality: Common Interest of Many Causes: Many causes benefit particularly from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization. In the case of my own work this effect was strong enough that after years of bogging down I threw up my hands and explicitly recursed on creating rationalists. If such causes can come to terms with not individually capturing all the rationalists they create, then they can mutually benefit from mutual effort on creating rationalists. This cooperation may require learning to shut up about disagreements between such causes, and not fight over priorities, except in specialized venues clearly marked.

  • Helpless Individuals: When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all. And in fact, the vast majority of large modern-day institutions simply fail to exist in the first place. This is why funding of Science is largely through money thrown at Science rather than donations from individuals—research isn’t a good emotional fit for the rare problems that individuals can manage to coordinate on. In fact very few things are, which is why e.g. 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.

  • Money: The Unit of Caring: Omohundro’s resource balance principle implies that the inside of any approximately rational system has a common currency of expected utilons. In our world, this common currency is called “money” and it is the unit of how much society cares about something—a brutal yet obvious point. Many people, seeing a good cause, would prefer to help it by donating a few volunteer hours. But this avoids the tremendous gains of comparative advantage, professional specialization, and economies of scale—the reason we’re not still in caves, the only way anything ever gets done in this world, the tools grownups use when anyone really cares. Donating hours worked within a professional specialty and paying-customer priority, whether directly, or by donating the money earned to hire other professional specialists, is far more effective than volunteering unskilled hours (see the toy comparison below).
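
To make the comparative-advantage arithmetic concrete, here is a minimal sketch; the wage and volunteering figures are invented assumptions, not numbers from the essay:

```python
# Toy comparison: donating earnings from hours in your professional
# specialty vs. volunteering those same hours at an unskilled task.
# Both dollar figures below are hypothetical assumptions.

professional_wage = 60.0  # $/hour earned in your specialty (assumed)
unskilled_value = 10.0    # $/hour of value produced volunteering (assumed)

hours = 5
earn_and_donate = hours * professional_wage  # money available to hire specialists
volunteer_value = hours * unskilled_value    # value produced directly

print(f"Earn and donate: ${earn_and_donate:.0f}")      # $300
print(f"Volunteer unskilled: ${volunteer_value:.0f}")  # $50
```

Under these assumptions the donated wages buy six times as much specialist labor as the volunteering produces; the ratio, not the particular numbers, is the point.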

  • Purchase Fuzzies and Utilons Separately: Wealthy philanthropists typically make the mistake of trying to purchase warm fuzzy feelings, status among friends, and actual utilitarian gains, simultaneously; this results in vague pushes along all three dimensions and a mediocre final result. It should be far more effective to spend some money/effort on buying altruistic fuzzies at maximum optimized efficiency (e.g. by helping people in person and seeing the results in person), buying status at maximum efficiency (e.g. by donating to something sexy that you can brag about, regardless of effectiveness), and spending most of your money on expected utilons (chosen through sheer cold-blooded shut-up-and-multiply calculation, without worrying about status or fuzzies). A toy version of that calculation follows below.
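
Here is what “shut up and multiply” might look like in miniature; the two giving options and all their numbers are invented for illustration:

```python
# Toy "shut up and multiply" calculation: rank giving options purely by
# expected utilons per dollar. All names and numbers are invented.

options = {
    "sexy, braggable project": {"p_success": 0.90, "impact": 100, "cost": 10_000},
    "boring, effective project": {"p_success": 0.30, "impact": 500_000, "cost": 10_000},
}

for name, o in options.items():
    # Expected utilons per dollar = success probability * impact / cost.
    score = o["p_success"] * o["impact"] / o["cost"]
    print(f"{name}: {score:.3f} expected utilons per dollar")

# The cold-blooded calculation funds the boring project; fuzzies and
# status get purchased separately, out of a different budget.
```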

  • Selecting Rationalist Groups: Trying to breed e.g. egg-laying chickens by individual selection can produce odd side effects on the farm level, since a more dominant hen can produce more egg mass at the expense of other hens. Group selection is nearly impossible in Nature, but easy to impose in the laboratory, and group-selecting hens produced substantial increases in efficiency. Though most of my essays are about individual rationality—and indeed, Traditional Rationality also praises the lone heretic more than evil Authority—the real effectiveness of “rationalists” may end up determined by their performance in groups.

  • Incremental Progress and the Valley: The optimality theorems for probability theory and decision theory are for perfect probability theory and decision theory. There is no theorem that incremental changes toward the ideal, starting from a flawed initial form, must yield incremental progress at each step along the way. Since perfection is unattainable, why dare to try for improvement? But my limited experience with specialized applications suggests that given enough progress, one can achieve huge improvements over baseline—it just takes a lot of progress to get there.

  • Whining-Based Communities: Many communities feed emotional needs by offering their members someone or something to blame for failure—say, those looters who don’t approve of your excellence. You can easily imagine some group of “rationalists” congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down. But this is not how real rationality works—there’s no assumption that other agents are rational. We all face unfair tests (and yes, they are unfair to different degrees for different people); and how well you do with your unfair tests is the test of your existence. Rationality is there to help you win anyway, not to provide a self-handicapping excuse for losing. There are no first-person extenuating circumstances. There is absolutely no point in going down the road of mutual bitterness and consolation, about anything, ever.

  • Mandatory Secret Identities: This post was not well-received, but the point was to suggest that a student must at some point leave the dojo and test their skills in the real world. The aspiration of an excellent student should not consist primarily of founding their own dojo and having their own students.

  • Beware of Other-Optimizing: Aspiring rationalists often vastly overestimate their own ability to optimize other people’s lives. They read nineteen webpages offering productivity advice that doesn’t work for them… and then encounter the twentieth page, or invent a new method themselves, and wow, it really works—they’ve discovered the true method. Actually, they’ve just discovered the one method in twenty that works for them, and their confident advice is no better than randomly selecting one of the twenty blog posts. Other-Optimizing is exceptionally dangerous when you have power over the other person—for then you’ll just believe that they aren’t trying hard enough.

  • Akrasia and Shangri-La: The Shangri-La diet works amazingly well for some people, but completely fails for others, for no known reason. Since the diet has a metabolic rationale and is not supposed to require willpower, its failure in my and other cases is unambiguously mysterious. If it required a component of willpower, then I and others might be tempted to blame ourselves for not having willpower. The art of combating akrasia (willpower failure) has the same sort of mysteries and is in the same primitive state; we don’t know the deeper rule that explains why a trick works for one person but not another.

  • Collective Apathy and the Internet: The causes of bystander apathy are even worse on the Internet. There may be an opportunity here for a startup to deliberately try to avert bystander apathy in online group coordination.

  • Bayesians vs. Barbarians: Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory. There’s a certain concept of “rationality” which says that the rationalists inevitably lose, because the Barbarians believe in a heavenly afterlife if they die in battle, while the rationalists would all individually prefer to stay out of harm’s way. So the rationalist civilization is doomed; it is too elegant and civilized to fight the savage Barbarians… And then there’s the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people, and (c) win…

  • Of Gender and Rationality: Analysis of the gender imbalance that appears in “rationalist” communities, suggesting nine possible causes of the effect, and possible corresponding solutions.

  • My Way: I sometimes think of myself as being like the protagonist in a classic SF labyrinth story, wandering further and further into some alien artifact, trying to radio back a description of what I’m seeing, so that I can be followed. But what I’m finding is not just the Way, the thing that lies at the center of the labyrinth; it is also my Way, the path that I would take to come closer to the center, from whatever place I started out. And yet there is still a common thing we are all trying to find. We should be aware that others’ shortest paths may not be the same as our own, but this is not the same as giving up the ability to judge or to share.

  • The Sin of Underconfidence: When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That’s what makes a lot of cognitive subtasks so troublesome—you know you’re biased but you’re not sure how much, and if you keep tweaking you may overcorrect. The danger of underconfidence (overcorrecting for overconfidence) is passing up opportunities on which you could have been successful; not challenging sufficiently difficult problems; losing forward momentum and adopting defensive postures; refusing to put the hypothesis of your inability to the test; losing enough hope of triumph to try hard enough to win. You should ask yourself, “Does this way of thinking make me stronger, or weaker?”

  • Well-Kept Gardens Die By Pacifism: Good online communities die primarily by refusing to defend themselves, and so it has been since the days of Eternal September. Anyone acculturated by academia knows that censorship is a very grave sin… in their walled gardens where it costs thousands and thousands of dollars to enter. A community with internal politics will treat any attempt to impose moderation as a coup attempt (since internal politics seem of far greater import than invading barbarians). In rationalist communities this is probably an instance of underconfidence—mildly competent moderators are probably quite trustworthy to wield the banhammer. On Less Wrong, the community is the moderator (via karma) and you will need to trust yourselves enough to wield the power and keep the garden clear.

  • Practical Advice Backed By Deep Theories: Practical advice is genuinely much, much more useful when it’s backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted. (Listed in increasing order of difficulty.) Stripping out the theories and giving the mere advice alone wouldn’t have nearly the same impact or even the same message; and oddly enough, translating experiments and math into practical advice seems to be a rare niche activity relative to academia. If there’s a distinctive LW style, this is it.

  • Less Meta: The fact that this final series was on the craft and the community seems to have delivered a push in something of the wrong direction: (a) steering toward conversation about conversation and (b) making present accomplishment pale in the light of grander dreams. Time to go back to practical advice and deep theories, then.

  • Go Forth and Create the Art!: I’ve developed primarily the art of epistemic rationality, in particular, the arts required for advanced cognitive reductionism… arts like distinguishing fake explanations from real ones and avoiding affective death spirals. There is much else that needs developing to create a craft of rationality—fighting akrasia; coordinating groups; teaching, training, verification, and becoming a proper experimental science; developing better introductory literature… And yet it seems to me that there is a beginning barrier to surpass before you can start creating a high-quality craft of rationality, having to do with virtually everyone who tries to think lofty thoughts going instantly astray, or indeed with even realizing that a craft of rationality exists and that you ought to be studying cognitive science literature to create it. It’s my hope that my writings, as partial as they are, will serve to surpass this initial barrier. The rest I leave to you.