# [Question] What are good rationality exercises?

I want to know what good rationality exercises are.

I was just on a call with Liron and PhilH, hanging out after the weekly LessWrong weekend event, and we discussed exercises that could happen on LessWrong.

Here is the list we generated:

• Thinking Physics

• Fermi Estimates

• Project Euler

• Calibration Training

• Basic probabilistic reasoning

• Basic have-you-read-the-sequences knowledge test (e.g. “Which of the following is an example of ‘belief as attire’?”)

Another user on the call (whose name I forget) suggested it could be fun to have a daily Fermi Estimate on LessWrong, where everyone submits their number and the model they used to reach the number. I think this would be quite exciting.

Please write answers with other exercises that you think are or might be great for rationality training, some explanation of why you think it could be good, and a suggestion of how it could be incorporated into LessWrong. I’ll probably add some of the above myself.

• Things that interest me:

• Let’s go exploring. Eliezer took a pretty low-bar activity (fan fic) and created something original (HPMOR). Why don’t we pick some notorious areas of the internet where we think a little LW-style overthinking could go a long way?

• A rational approach to cultivating imagination, creativity, and meditation. We have so many tools here for modeling questions of fact. Can’t rationality help us develop the right side of the brain as well as the left?

• Business ideas we could collaborate on, that hinge primarily on rational thinking, learning how to learn, and conscientiousness.

I would not participate in activities that boil down to arbitrary left-brain problem solving.

• “Doing impossible things”

• Get 100 strangers to show up at a specific place at a specific time.

• Make \$5,000 counterfactual dollars in a weekend.

• Be featured in a major print publication in less than a month.

• etc.

• Answer: Check My Understanding

Here’s how it’d work. Suppose I want to improve my understanding of Aumann’s Agreement Theorem. I would write up my thoughts, doing my best to explain what I know about it. Then other people would comment on what I’m missing and where I went wrong.

This seems useful for a few different reasons:

• As an author, the comments provide you with personalized feedback and allow you to “fill in the gaps”.

• As an author, the act of doing the initial write-up seems like it’d be very beneficial. Ditto for readers writing out their comments. (I have the Feynman Technique in mind.)

• As a reader, you may have a decent understanding of Aumann’s Agreement Theorem, but seeing it explained by a different author might help some things “click” for you (I have Non-Expert Explanation in mind).

• Answer: Writing Your Hypothetical Apostasy

See Write Your Hypothetical Apostasy on Overcoming Bias.

Imagine, if you will, that the world’s destruction is at stake and the only way to save it is for you to write a one-pager that convinces a jury that your old cherished view is mistaken or at least seriously incomplete. The more inadequate the jury thinks your old cherished view is, the greater the chances that the world is saved. The catch is that the jury consists of earlier stages of yourself (such as yourself such as you were one year ago). Moreover, the jury believes that you have been bribed to write your apostasy; so any assurances of the form “trust me, I am older and know better” will be ineffective. Your only hope of saving the world is by writing an apostasy that will make the jury recognize how flawed/partial/shallow/juvenile/crude/irresponsible/incomplete and generally inadequate your old cherished view is.

I’m not sure exactly how this fits into group rationality practice. I personally am always more motivated to write when it’s something that I will publish, so having a place where we publish hypothetical apostasies could be useful for motivational reasons. It would also be useful because you’d get feedback on your thought process, although that point could be made for many other exercises.

• Oh yeah, this one’s great. Thanks for reminding me.

• I was thinking that if the sequences and other LW classics were a high school class, we could make something like an SAT subject test to check understanding/fluency in the subject, then that could be a badge on the site and potentially a good credential to have in your career.

The kinds of questions could be like:

1.

If a US citizen has a legal way to save \$500/year on their taxes, but it requires spending 1 hour/day filling out boring paperwork on 5 days of every week, should they do it?

a. Virtually everyone should do it

b. A significant fraction (10-90%) of the population should do it

c. Virtually no one should do it

2.

With sufficient evidence and a rational deliberation process, is it possible to become sure that the Loch Ness Monster does/doesn’t exist?

a. We CAN potentially become sure either way

b. We CAN’T potentially become sure either way

c. We can only potentially become sure that it DOES exist

d. We can only potentially become sure that it DOESN’T exist
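Question 1 is really a hidden arithmetic problem: convert the savings and the time cost into an implied hourly wage. A minimal sketch of that calculation, using only the figures stated in the question:

```python
# Implied hourly wage of the tax paperwork in question 1.
savings_per_year = 500       # dollars saved on taxes per year
hours_per_year = 1 * 5 * 52  # 1 hour/day, 5 days/week, 52 weeks/year = 260 hours

implied_wage = savings_per_year / hours_per_year
print(f"{hours_per_year} hours/year -> ${implied_wage:.2f}/hour")
# 260 hours/year -> $1.92/hour
```

Since roughly \$1.92/hour is far below what almost anyone’s time is worth, the arithmetic points toward answer (c).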

• I recall reading educational psych stuff about how the act of both 1) creating and 2) answering questions like this is a great way to deepen your understanding.

See the Updates Thread. Basically, taking note of the belief updates you perform and discussing why you performed them. What did you previously believe, what do you currently believe, and why did the data you observed move you from there to here?
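The before/after structure of such an update can be made concrete with Bayes’ rule. A minimal sketch, where the prior and both likelihoods are invented numbers purely for illustration:

```python
# Bayes' rule: posterior = P(data | H) * P(H) / P(data)
prior = 0.30             # what I previously believed: P(hypothesis)
p_data_if_true = 0.80    # P(observed data | hypothesis true)
p_data_if_false = 0.20   # P(observed data | hypothesis false)

# Total probability of seeing the data, under both possibilities.
p_data = p_data_if_true * prior + p_data_if_false * (1 - prior)
posterior = p_data_if_true * prior / p_data

print(f"prior {prior:.2f} -> posterior {posterior:.2f}")
# prior 0.30 -> posterior 0.63
```

Writing an update post then amounts to stating the three inputs and checking that the move from prior to posterior actually follows from them.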

• Answer: Betting With Real Money

From the end of Inadequate Equilibria:

I don’t have good, repeatable exercises for training your skill in this field, and that’s one reason I worry about the results. But I can tell you this much: bet on everything. Bet on everything where you can or will find out the answer. Even if you’re only testing yourself against one other person, it’s a way of calibrating yourself to avoid both overconfidence and underconfidence, which will serve you in good stead emotionally when you try to do inadequacy reasoning. Or so I hope.

Eliezer seems to be referring to real money here. And I recall him talking elsewhere about how it is useful to put real money on the line.

This meshes with my experiences playing poker. It’s one thing to study and learn that X is a mistake. It’s another thing to make the mistake of X and lose a big pot because of it. There’s something about losing real money that cements it in your head. And I’m not just referring to my own experiences. From talking to other poker players, it seems that this is the norm.

However, real money is a touchy subject and I’m not sure how we would actually pull this off. But I figure that there is still value in bringing it up.
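One money-free way to get some of the same calibration feedback is to score recorded predictions numerically, for example with a Brier score (my suggestion, not something from the quote; the function and sample data are illustrative):

```python
def brier_score(predictions, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    0.0 is perfect; always saying 50% scores 0.25; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Probabilities you assigned, and whether each event actually happened (1/0).
preds = [0.9, 0.7, 0.6, 0.2]
actual = [1, 1, 0, 0]
print(f"Brier score: {brier_score(preds, actual):.3f}")
# Brier score: 0.125
```

A running score like this stings less than lost money, but it still punishes both overconfidence and underconfidence in a way you can track over time.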

• Betting with real money is definitely a useful way of probing at your own confidence (I don’t do it much at all due to general underconfidence, but it’s sure helped me nail down the feeling of being really sure of something), and a lot of my rationalist friends do it on a handshake-agreement basis. However, any way of formalizing this would turn LW (or whatever institution) into a gambling site, which is illegal :/

• There may be some creative non-formal solutions though.

• On one end of the spectrum you could have a token system and leave it up to the users to figure out actually exchanging money themselves (a lot of poker apps do this).

• Getting less hands-on, you could do away with the tokens and just act as a matchmaker, getting two parties who want to make a bet in touch with each other and they could handle it from there.

• Getting even less hands-on, you could just function as a place to discuss bets you may want to make in the real world. E.g. sports betting or stock picking (I guess there aren’t too many examples of this).

• There could be ways of making it legal given that we’re a non-profit with somewhat academic interests. (By “making” I mean actually changing the law or getting a No-Action Letter.) Most people who do gambling online do it for profit, which is where things get tricky.

• Making bets is good exercise too. If you can’t find other people to bet with you can also make public predictions.

• Answer: Fermi Estimates

Fermi estimates are attempts to answer a quantitative question using order-of-magnitude style reasoning. These are questions like “How many people fly on airplanes each day?” or “How many atoms are in my arm?”. In contrast to things like calibration practice, these are much more generative, attempting to tie together parts of your world model to come up with a model that answers a question.

On LessWrong, this could be practically implemented by having a set of 100-1000 questions that users can do either in a weekend blitz, or spaced out over time. A user who got 100 correct (within a factor of 2x) could have a sign on their profile indicating that they completed this task. It could also be implemented as a daily/weekly question for users to answer and then compare notes on.
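For concreteness, a Fermi estimate is just a short chain of rough factors multiplied together. A sketch for the “how many people fly each day” question above, where both factors are order-of-magnitude guesses rather than looked-up data:

```python
# Fermi estimate: how many people fly on airplanes each day?
flights_per_day = 100_000    # guess: worldwide commercial flights per day
passengers_per_flight = 100  # guess: average across small and large planes

passengers_per_day = flights_per_day * passengers_per_flight
print(f"~{passengers_per_day:,} passenger-trips per day")
# ~10,000,000 passenger-trips per day
```

Under the within-a-factor-of-2x scoring suggested above, this estimate would count as correct anywhere between 5 and 20 million; the model (two named factors) matters as much as the final number when comparing notes.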

• When I first read the sequences, I thought “What do I know and how do I think I know it?” was pretty banal and useless. Didn’t everyone know that? Philosophy 101, question your beliefs, look for hidden assumptions, etc.

The older I get the more I come to think that no, not everyone knows this, and even the people who know it don’t practice it enough. I’m not sure though.

• I think of “What do I know and how do I think I know it?” as the “root cause” of essentially all other epistemic rationality, i.e. if you’re sufficiently good at that one skill, all the others will follow naturally from it. Conversely, that suggests it’s really difficult to get really good at it: if I’m missing any other epistemic rationality skill, it means I’m not good enough at “What do I know and how do I think I know it?”.

I’d say the “obvious” version of the skill involves activities which look like questioning beliefs, looking for hidden assumptions, etc. But these are surface-level activities which don’t necessarily trace the whole belief-generating pipeline. The full skill is about modelling the entire physical process which created your map from the territory.

One example I’ve thought about recently: we’ve had a bunch of posts lately on simulacrum levels. Personally, I saw most of the ideas in those posts as kind-of-obvious applications of the general principle/habit “when you hear words, don’t ask what they literally signify, ask what physical process generated them and what that implies about the world”. (Or the HPMOR version: “Professor Quirrell didn’t care what your expression looked like, he cared which states of mind made it likely.”) This is a principle/habit which naturally pops out of modelling the physical process which produces your own beliefs, whenever someone’s words appear in the belief-production pipeline.

• The CFAR Handbook has a lot of good ones.

• According to a vague feeling of a couple of people I know, the CFAR handbook is tricky enough that reading it without doing CFAR could be dangerous.

• It seems very plausible that you’d get more value out of them after having gone through CFAR. But it seems implausible that you’d get zero or negative value out of them without having gone through CFAR. At least in terms of expected value.

• Nah, I don’t think that’s a real concern. Or at least I really don’t see much danger in the things in there, and have worked a lot with it in the past.

• This “incorporated into LW” condition is a tight leash; and it reminds me of why I don’t usually… recommend LW to my friends.

Some matters are too personal to talk about on the Internet. Like marital infidelity, which 1) is something outside of many people’s experiences, 2) definitely seems to require tons of instrumental rationality even on the best of days, 3) has (ethical) implications which real people often don’t take into account despite other real people often expecting them to (but knowing they won’t), and 4) unlike acceptable LW material with which it shares the above characteristics, it hurts. And so it is with some other things that actual adults have to deal with.

Unless you speak about something already in the past. Maybe we should have a Cemetery of Failed Things in our City. (Our current Cemetery of Failed Things holds several startups and personal habits, which is, wow, how lucky we are.)

• Basic have-you-read-the-sequences knowledge test (e.g. “Which of the following is an example of ‘belief as attire’?”)

This might be combined with calibration training.