# Get Curious

Being levels above in rationality means doing rationalist practice 101 much better than others, just like being a few levels above in fighting means executing a basic front-kick much better than others.

> I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.
>
> - Bruce Lee

Recently, when Eliezer wanted to explain why he thought Anna Salamon was among the best rationalists he knew, he picked out one feature of Anna’s behavior in particular:

> I see you start to answer a question, and then you stop, and I see you get curious.

For me, the ability to reliably get curious is the basic front-kick of epistemic rationality. The best rationalists I know are not necessarily those who know the finer points of cognitive psychology, Bayesian statistics, and Solomonoff Induction. The best rationalists I know are those who can reliably get curious.

Once, I explained the Cognitive Reflection Test to Riley Crane by saying it was made of questions that tempt your intuitions to quickly give a wrong answer. For example:

> A bat and a ball cost \$1.10 in total. The bat costs \$1.00 more than the ball. How much does the ball cost?

If you haven’t seen this question before and you’re like most people, your brain screams “10 cents!” But elementary algebra shows that can’t be right. The correct answer is 5 cents. To get the right answer, I explained, you need to interrupt your intuitive judgment and think “No! Algebra.”
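The interrupt-and-compute step can be written out explicitly. Here is a minimal sketch (mine, not from the post) that does the algebra with exact fractions, so no rounding can hide an error:

```python
from fractions import Fraction

# bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: (ball + 1.00) + ball = 1.10  =>  2*ball = 0.10  =>  ball = 0.05.
total = Fraction(110, 100)   # $1.10, as an exact fraction
difference = Fraction(1)     # the bat costs $1.00 more than the ball

ball = (total - difference) / 2
bat = ball + difference

assert bat + ball == total           # verify against the first constraint
assert bat - ball == difference      # verify against the second constraint
print(ball)  # 1/20 of a dollar, i.e. 5 cents
```

The two asserts are the point: the intuitive “10 cents” answer fails the first check immediately.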

A lot of rationalist practice is like that. Whether thinking about physics or sociology or relationships, you need to catch your intuitive judgment and think “No! Curiosity.”

Most of us know how to do algebra. How does one “do” curiosity?

Below, I propose a process for how to “get curious.” I think we are only just beginning to learn how to create curious people, so please don’t take this method as Science or Gospel but instead as an attempt to Just Try It.

As with my algorithm for beating procrastination, you’ll want to practice each step of the process in advance so that when you want to get curious, you’re well-practiced on each step already. With enough practice, these steps may even become habits.

### Step 1: Feel that you don’t already know the answer.

If you have beliefs about the matter already, push the “reset” button and erase that part of your map. You must feel that you don’t already know the answer.

Exercise 1.1: Import the feeling of uncertainty.

1. Think of a question you clearly don’t know the answer to. When will AI be created? Is my current diet limiting my cognitive abilities? Is it harder to become the Prime Minister of Britain or the President of France?

2. Close your eyes and pay attention to how that blank spot on your map feels. (To me, it feels like I can see a silhouette of someone in the darkness ahead, but I wouldn’t take bets on who it is, and I expect to be surprised by their identity when I get close enough to see them.)

3. Hang on to that feeling or image of uncertainty and think about the thing you’re trying to get curious about. If your old certainty creeps back, switch to thinking about who composed the Voynich manuscript again, then import that feeling of uncertainty into the thing you’re trying to get curious about, again.

Exercise 1.2: Consider all the things you’ve been confident but wrong about.

1. Think of things you once believed but were wrong about. The more similar those beliefs are to the beliefs you’re now considering, the better.

2. Meditate on the frequency of your errors, and on the depths of your biases (if you know enough cognitive psychology).

### Step 2: Want to know the answer.

Now, you must want to fill in this blank part of your map.

You mustn’t wish it to remain blank due to apathy or fear. Don’t avoid getting the answer because you might learn you should eat less pizza and more half-sticks of butter. Curiosity seeks to annihilate itself.

You also mustn’t let your desire that your inquiry have a certain answer block you from discovering how the world actually is. You must want your map to resemble the territory, whatever the territory looks like. This enables you to change things more effectively than if you falsely believed that the world was already the way you want it to be.

Exercise 2.1: Visualize the consequences of being wrong.

1. Generate hypotheses about the ways the world may be. Maybe you should eat less gluten and more vegetables? Maybe a high-protein diet plus some nootropics would boost your IQ 5 points? Maybe your diet is fairly optimal for cognitive function already?

2. Next, visualize the consequences of being wrong, including the consequences of remaining ignorant. Visualize the consequences of performing 10 IQ points below your potential because you were too lazy to investigate, or because you were strongly motivated to justify your preference for a particular theory of nutrition. Visualize the consequences of screwing up your neurology by taking nootropics you feel excited about but that often cause harm to people with cognitive architectures similar to your own.

Exercise 2.2: Make plans for different worlds.

1. Generate hypotheses about the way the world could be — different worlds you might be living in. Maybe you live in a world where you’d improve your cognitive function by taking nootropics, or maybe you live in a world where the nootropics would harm you.

2. Make plans for what you’ll do if you happen to live in World #1, what you’ll do if you happen to live in World #2, etc. (For unpleasant possible worlds, this also gives you an opportunity to leave a line of retreat for yourself.)

3. Notice that these plans are different. This should produce in you some curiosity about which world you actually live in, so that you can make plans appropriate for the world you do live in rather than for one of the worlds you don’t live in.

Exercise 2.3: Recite the Litany of Tarski.

The Litany of Tarski can be adapted to any question. If you’re considering whether the sky is blue, the Litany of Tarski is:

> If the sky is blue,
> I desire to believe the sky is blue.
> If the sky is not blue,
> I desire not to believe the sky is blue.

Exercise 2.4: Recite the Litany of Gendlin.

The Litany of Gendlin reminds us:

> What is true is already so.
> Owning up to it doesn’t make it worse.
> Not being open about it
> doesn’t make it go away.
> And because it’s true,
> it is what is there to be interacted with.
> Anything untrue isn’t there to be lived.
> People can stand what is true,
> for they are already enduring it.

### Step 3: Sprint headlong into reality.

If you’ve made yourself uncertain and then curious, you’re now in a position to use argument, empiricism, and scholarship to sprint headlong into reality. This part probably requires some domain-relevant knowledge and an understanding of probability theory and value of information calculations. What tests could answer your question quickly? How can you perform those tests? If the answer can be looked up in a book, which book?
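As a toy illustration of a value-of-information calculation: suppose you are weighing the diet question from Step 2. Every number below (the probability, the payoffs, the two-diet setup) is invented for illustration, not taken from the post.

```python
# Toy value-of-information calculation for a two-diet decision.
p_a_better = 0.6   # your credence that diet A beats diet B (assumed)

# Utility of each (choice, world) pair -- illustrative numbers only:
payoff = {("A", "A_better"): 10, ("A", "B_better"): 2,
          ("B", "A_better"): 3,  ("B", "B_better"): 9}

def expected(choice):
    """Expected payoff of committing to `choice` under current uncertainty."""
    return (p_a_better * payoff[(choice, "A_better")]
            + (1 - p_a_better) * payoff[(choice, "B_better")])

# Acting now: pick whichever choice looks better in expectation.
act_now = max(expected("A"), expected("B"))

# With perfect information, you would pick the best choice in each world.
with_info = (p_a_better * payoff[("A", "A_better")]
             + (1 - p_a_better) * payoff[("B", "B_better")])

voi = with_info - act_now   # upper bound on what running a test is worth
print(round(voi, 3))        # ≈ 2.8 utility units
```

If the cost of a test (time, money, effort) is below that bound, curiosity pays in expectation; if not, you can justifiably move on.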

These are important questions, but I think the first two steps of getting curious are more important. If someone can master steps 1 and 2, they’ll be so driven by curiosity that they’ll eventually figure out how to do step 3 for many scenarios. In contrast, most people who are equipped to do step 3 pretty well still get the wrong answers because they can’t reliably execute steps 1 and 2.

### Conclusion: Curiosity in Action

A burning itch to know is higher than a solemn vow to pursue truth. If you think it is your duty to doubt your own beliefs and criticize your own arguments, then you may do this for a while and conclude that you have done your duty and you’re a Good Rationalist. Then you can feel satisfied and virtuous and move along without being genuinely curious.

If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.

My recommendation? Practice the front-kick of epistemic rationality every day. For months. Train your ape-brain to get curious.

Rationality is not magic. For many people, it can be learned and trained.

• Also, learn to differentiate between genuine curiosity and what I like to call pseudo-curiosity—basically, being satisfied by conclusions rather than concepts. Don’t let the two overlap. This is especially hard when conclusions are readily available most of the time and often the first item in a Google search. In terms of genuine curiosity, Google has been the bane of my existence—I will start off moderately curious, but instead of moving to that higher stage of curiosity, I will be sated by facts and conclusions without actually learning anything (similar to a guessing-the-teacher’s-password situation). After a couple hours of doing this, I feel very scholarly and proud of my ability to parse so much information, when in reality all I did was collect a bunch of meaningless symbols.

To combat this, I started keeping a “notebook of curiosities”. The moment I get curious, I write down whatever it is I’m curious about, and then write everything I know about it. At this point, I determine whether or not anything I know is a useful springboard; otherwise, I start from scratch. Then I circle my starting node and start the real work, with the following rules:

• Every fact or concept I write must follow directly from a previous node (never more than two or three reasoning steps away). Most of the time, this results in a very large diagram referencing multiple pages. I use pen and paper only because I like to use it outside.

• Wikipedia is a last resort—I don’t want to be tempted by easy facts. I use textbooks → arXiv → JSTOR → Google Scholar, in order of preference. It’s a lot of work.

• If I skip some reasoning or concept because I think it is trivial, I write the reason why it is trivial. Most of the time, this results in something interesting.

Doing this has revealed many gaps in my knowledge. I’ve become increasingly aware of a lack of internalization of basic concepts and modes of thinking that are necessary for certain concepts. It also forces me to confront my actual interest in the subject, rather than my perceived interest.

The majority of what I use it for is math-related, so it’s more tailored to that use case.

• This goes away when you start to realize what shit sources like Wikipedia are. Go through the sources cited by Wikipedia articles sometimes. Realize that everything presented to you as fact is generally a conclusion come to by people who are downright terrible at basic reasoning.

Practicing rephrasing everything as if it were written in E-Prime can be helpful, taking special note when normative and positive statements start becoming muddled.

• I have to agree on the terribleness of Wikipedia. The approach on Wikipedia is as such: if you can cite that 2 * 2 = 5, then you can write about it, but it is a mortal sin against Wikipedia to derive 2 * 2 = 4 from first principles. That’s because Wikipedia is an encyclopedia, and Wikipedia’s process only rearranges knowledge while introducing biases and errors; that’s by design. The most common bias is to represent both sides equally when they shouldn’t be; the second most common is the side with the most people editing Wikipedia winning while screaming “lalala, original research, can not hear you” when it comes to basic reasoning.

For 2 * 2 it does generally work, and the rules would be glossed over; for anything more complicated, well, Wikipedia equates any logic with any nonsense.

Then, a great deal of websites regurgitate stuff from Wikipedia, often making it very difficult or impossible to find any actual information.

That being said, Wikipedia is a pretty good online link directory. Just don’t rely on the stuff written on Wikipedia, and don’t rely on articles that were repeating the ‘citation needed’ section and then were added as the needed citation. And be aware that the selection of links can be very biased.

• The thing about citations and against derivations from first principles is deliberate and (so long as participation is open to everybody) I think removing it could do more harm than keeping it: it’s hard to tell if a derivation from first principles in a field you’re not familiar with is valid, so short of somehow magically increasing the number of (say) editors with a PhD in physics by a factor of 10, allowing OR would essentially give free rein to crackpots, since there wouldn’t be that many people around who could find the flaws in their reasoning. Right now, they (at least in principle) have to find peer-reviewed publications supporting their arguments, which is not as easy as posting some complicated derivation and hoping no one finds the errors.

One big problem with Wikipedia (which I’m not sure could be fixed even in principle) is that sometimes you’re not allowed to taboo words, because you’re essentially doing lexicography. If the question is “Was Richard Feynman Jewish?”, “He had Jewish ancestry but he didn’t practise Judaism” is not a good-enough answer if what you’re deciding is whether or not the article about Feynman should be in the category for Jewish American physicists; if the question is “Was an infant who has since become a transsexual woman a boy?”, answering “it had masculine external genitalia but likely had feminine brain anatomy” is not good enough if what you’re deciding is whether the article should say “She was born as a boy”; and so on and so forth. (There once was an argument about whether accelerometers measure inertial acceleration, even though both parties agreed about what an accelerometer would read in all of the situations they could come up with, because they meant different things by inertial acceleration. What happened is that someone came up with other situations, such as magnetically levitating the accelerometer or placing it somewhere with non-negligible tidal forces, and the parties did disagree about what would happen. My view is that then you’re just misusing the accelerometer, and drawing any conclusions from such circumstances is as silly as saying that resistance is not what ohmmeters measure because if you put a battery across an ohmmeter, what it reads is not the internal resistance of the battery. But IIRC, rather than pointing that out I just walked away and left Wikipedia, even though I later came back with a different user name.)

• Agreed that removing the condition against first principles would perhaps screw stuff up more.

But the attitude against original research is uncalled for. When there’s someone who misunderstands the quoted articles, you can’t just go ahead and refer to first principles (no, that’s original research), and the attitude is: I’m not ashamed, I’m instead proud that I don’t understand the topic we’re talking about, proud that I don’t (because I can’t) do original research. Non-experts come up with all sorts of weird nonsense interpretations of what experts say, that experts would never even feel the need to publish anything to dispel. And then you can’t argue with them rationally; they proudly reject any argumentation from first principles.

• Huh, yes. OR shouldn’t be allowed into articles, but it should be on talk pages. (Plus, some people use a ridiculously broad definition of OR. If I pointed out that the speed of light in m/s is exact and the number of metres in a yard is exact, and proceeded to give the exact value of the speed of light in imperial units, and I called that original research of mine anywhere outside Wikipedia, I’d be (rightly) laughed away. Hell, even my pointing out that the word Jewish has several meanings was dismissed as OR, by someone who insisted that on Wikipedia the only possible meaning of Jewish is ‘someone who a reliable source refers to as Jewish’.)

• > If I pointed out that the speed of light in m/s is exact and the number of metres in a yard is exact and proceeded to give the exact value of the speed of light in imperial units

That’s not reasonably called OR on Wikipedia either. See:
http://en.wikipedia.org/wiki/Wikipedia:No_original_research#Routine_calculations

> someone who insisted that on Wikipedia the only possible meaning of Jewish is ‘someone who a reliable source refers to as Jewish’.

That actually sounds pretty reasonable to me. If you want to use a more nuanced concept to refer to someone, you could always find a reliable source who has used that nuanced concept to refer to the person. Or you could do the OR somewhere else and then someone else can use that to improve the article.

• > If I pointed out that the speed of light in m/s is exact and the number of metres in a yard is exact and proceeded to give the exact value of the speed of light in imperial units
>
> That’s not reasonably called OR on Wikipedia either. See: http://en.wikipedia.org/wiki/Wikipedia:No_original_research#Routine_calculations

For some time, they claimed that converting exact values as rational numbers (as opposed to conversions with a finite number of sigfigs) is not a routine calculation. (To be honest, I’m not sure I remember what eventually happened. [goes to check] Oh, yeah. The footnote stayed because we did find a citation. Not that I’d normally consider the personal website of a cryptographer a reliable source, but still.)
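For what it’s worth, the conversion under discussion really is routine in the mechanical sense: both defining constants are exact, so the result is an exact rational number. A sketch (my own illustration, not from the thread):

```python
from fractions import Fraction

c_m_per_s = 299792458                # exact, by definition of the metre
m_per_yard = Fraction(9144, 10000)   # exact, by definition of the yard (0.9144 m)
m_per_mile = 1760 * m_per_yard       # 1609.344 m, still exact

c_miles_per_s = Fraction(c_m_per_s) / m_per_mile
print(c_miles_per_s)         # an exact rational number, no sigfigs involved
print(float(c_miles_per_s))  # ≈ 186282.397 miles per second
```

Because `Fraction` arithmetic never rounds, the output is the exact value; the float conversion at the end is only for readability.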

• Can you give specific examples of articles that are biased? Your comment and its parent made me curious about Wikipedia’s quality :)

• Well, this article is pretty bad:

http://en.wikipedia.org/wiki/Radiation_hormesis

but it used to be even worse. First of all,

> that low doses of ionizing radiation (within the region and just above natural background levels) are beneficial

is hardly a hypothesis. A proper hypothesis would be “[specific mechanism] activates in the presence of ionizing radiation and has such-and-such consequences”. It would, incidentally, be easy to get rid of if it was wrong, or to show correct if it was correct, and it’d be interesting even if the effect was too weak to beat the direct damage from radiation. I barely managed to get their proposed cause (some untapped powers of self-repair mechanisms) into the definition of the hypothesis, because the group that’s watching the article loved to just have a hypothesis that low doses of radiation are beneficial, whatever the mechanisms may be; they don’t care, they just propose that the effect is there. They don’t care to propose that there’s some self-repair mechanism that activates at low doses of radiation, either; they want to propose that the effect is so strong there’s an actual benefit.

Also, note the complete absence of references to the radiation-cure quacks of the early 20th century, which fall under the definition here. And good luck adding those, because there’s some core group that’s just removing them as “irrelevant”. The link selection is honed to make it look like something new and advanced that could only have been thought of via some cool counterintuitive reasoning, rather than the first thing we ever thought of when we discovered radiation: ooh, cool, some poison; we’re not sure how it works, but it must be good in moderation. Then it took about 60 years to finally discard this hypothesis and adopt LNT.

And of course, don’t even dream of adding here the usual evolutionary counterargumentation to various allusions to some untapped powers of the human body.

Note: radioactive remedies such as radon springs, radon caves, healing stones, etc. are a big business.

• I doubt that selecting less than half a sentence from the lead paragraph of an article is a very careful approach to criticism.

This article actually looks pretty typical of Wikipedia articles on relatively obscure quackish biomedical ideas. It outlines what the “hypothesis” is, then makes clear that it is explicitly rejected by various people who have studied the matter. The subject doesn’t have enough history or enough attention from skeptics to get the kind of treatment that, say, the article on homeopathy does.

There are two completely junk charts (no scale!) in the article. Yuck!

When read carefully, the article makes clear it’s talking about an effect that even if it existed, would be very close to the noise threshold. It requires some statistical awareness — much more than the typical academic has, to say nothing of Wikipedians — to recognize that this is the same thing as saying “there’s no reason to suspect an effect here.”

The primary bias problem here isn’t the article; it’s that the subject matter is made of bias, at least as far as I can tell. There’s only so many times an article can say “there are a few noisy experiments, but nobody who actually counts on radiation safety thinks this exists.”

That said, there’s one thing I was really surprised to find: the talk page doesn’t seem to be full of supporters saying that their hypothesis is being persecuted by the mainstream and skeptics calling them a bunch of names. And that suggests to me that improvement shouldn’t be too hard.

• > When read carefully, the article makes clear it’s talking about an effect that even if it existed, would be very close to the noise threshold. It requires some statistical awareness — much more than the typical academic has, to say nothing of Wikipedians — to recognize that this is the same thing as saying “there’s no reason to suspect an effect here.”

Is this really true? I’m not a part of academia in any sort of way, nor do I have any sort of math or statistical training beyond what’s referred to as College Algebra, and I recognized immediately what the effect being close to the noise threshold meant.

I’m just wondering if I just have a better intuitive grasp of statistics than your typical academic (and what exactly you mean by academic... all teachers? professors? English professors? stats majors?).

Of course, I read LessWrong and understand Bayes because of it, so maybe that’s all it takes...

• > Is this really true?

Yes. Most of the academy doesn’t use math or have any feel for it. Being forced to take algebra when you truly do not give a damn about it results in people learning enough to pass the test and then forgetting it forever.

> I’m just wondering if I just have a better intuitive grasp of statistics than your typical academic (and what exactly you mean by academic... all teachers? professors? English professors? stats majors?).

Academics are people who have jobs teaching/lecturing in tertiary education. In a US context the lowest you can go and still be an academic is teaching at a community college. Alternatively, an academic is part of the community of scholars, people who actually care about knowledge as such rather than as a means to an end. Most of these people would not know statistics if it bit them on the ass. Remember, the world is insane.

• Well yeah, that’s a very good way to describe it—made of bias. We always believed that if something is bad in excess it’s good in moderation, and then proceeded to rationalize.

The topic is actually not very obscure. It pops up in any discussion of Chernobyl or Fukushima or cold-war nuclear testing or radon testing of households or the like; there’s that ‘scepticism’ towards choosing the linear no-threshold model as a prior.

The seriously bad bit is that it is entirely missing the historical reference. When I am looking up an article on some pseudoscience, I want to see the history of said branch of pseudoscience. It’s easier to reject something like this when you know that it was the first hypothesis we made about the biological effects of radiation (and the first hypothesis we would make about new poisons in general until the 20th century).

With regard to the sanity of the talk page, that’s what’s most creepy. They get rid of the historical background on this thing, calmly and purposefully (I don’t know if that’s still the case; I’m going to try adding a link to quack radiation cures again). There are honest pseudo-scientists who believe their stuff, and they put up all the historical context themselves. And there are cases where you’ve got some sane, rational people with an agenda whose behaviour is fairly consistent with knowing full well that it is a fraud.

note: the LNT makes sense as a prior based on the knowledge that radiation near the background level is a very minor contributor to the number of mutations, and if you look at the big picture—the number of mutations—for doses up to many times background, you’re still varying it by a microscopic amount around some arbitrary point, and you absolutely should choose linear behaviour as a prior. Still, there are the ‘sceptics’ who want to choose zero effect at low doses as a prior because the effects were never shown and Occam’s razor blah blah blah.

edit: ah, by the way, I wrote some of that description outlining the hypothesis, making it clearer that they start from beneficial effects and then hypothesise some defence mechanisms that are strong enough to cancel the detrimental effect. That’s completely backwards reasoning.

• Overall, that sounds more like a bunch of folks who have heard of this cool, weird, contrarian idea and are excited by it, rather than people who are trying to perpetrate a fraud for personal benefit. Notably, there isn’t any mention in the article of any of the quack treatments you mention above; there are no claims of persecution or conspiracy; there’s not even much in the way of anti-epistemology.

• It’s a pseudoscience article from which they remove the clues by which one could recognize pseudoscience; that’s what’s bad.

Also, it should link to the past quack treatments of the 20th century. I’m going to try again adding those when I have time. It’s way less cool and contrarian when you learn that it was popular nonsense when radiation was first discovered.

• > I’m going to try again adding those when I have time.

If you added those before and they were reverted, then you should be discussing it on Talk and going for consensus.

• It was ages ago (>5 years, I think); I don’t even quite remember how it all went.

What’s irritating about Wikipedia is that the rule against original research in the articles spills over and becomes an attitude against any argumentation not based on appeal to authority. So you have the folks there, they are curious about this hormesis concept; maybe they are actually just curious, not proponents or an astroturf campaign. But they are not interested in trying to listen to any argument and think for themselves about whether it is correct. I don’t know; maybe it’s an attempt to preserve their own neutrality on the issue. In any case it is incredibly irritating. It’s half-curiosity.

• Well, here’s a talk section of an article on a subject I know something about. This should give an idea of Wikipedia’s process and what kind of content results from it:

http://en.wikipedia.org/wiki/Talk:Bayesian_network

Here’s another one:

http://en.wikipedia.org/wiki/Confounding

The very first sentence is wrong.

• > A bat and a ball cost \$1.10 in total. The bat costs \$1.00 more than the ball. How much does the ball cost?

I had the following (in rapid succession): 10 cents; whoops, it adds up to 120 cents; aha, 5 cents; adds up to 110; done.

Doesn’t really matter what stupid heuristic you try if you verify the result. I can of course do: let a + b = 1.1, a = b + 1, b + 1 + b = 1.1, 2b = 0.1, b = 0.05, but it takes a lot longer to write, and to think, and note the absence of a verification step here.

The “No! Algebra” is a surefire way to do things slower. Verification and double-checking is the key, imo. Algebra is for unwieldy problems where you can’t test guesses quickly, failed to guess, have to use pencil and paper, etc. When you rely on short-term memory you really could be best off trying to intuitively get the answer, then checking it, then rewarding yourself when correct (if verification is possible).
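The guess-then-verify loop described here can be sketched directly; the tolerance below is an arbitrary choice for comparing floats, and the whole snippet is my illustration rather than anything from the thread:

```python
# Guess-and-verify: try a cheap intuitive answer, then check it against the
# problem's constraints before accepting it.
def check(ball):
    bat = ball + 1.00                      # bat costs $1.00 more than the ball
    return abs((bat + ball) - 1.10) < 1e-9  # do they total $1.10?

print(check(0.10))  # False: 0.10 + 1.10 = 1.20, not 1.10
print(check(0.05))  # True:  0.05 + 1.05 = 1.10
```

The point of the comment survives in code form: any heuristic is fine as a guess generator, as long as every guess passes through `check` before being believed.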

• The “whoops” is more like some parallel pondering of just how stupid I must be.

• Instinctively my thought process goes: the dollar is the extra, then the ten cents is split, \$0.05, done (plus or minus a double check). I can sense the \$0.10 answer trying to be suggested instantly in the background, but it has a fraction of a second before it gets cut off, presumably because this is a kick type I’ve done 10,000 times.

Formal algebra is the very slow (in relative terms) but reliable answer.

• Well yeah, the processes at that timescale are not even exactly serial. When the 10 cents appears I just derail into pondering how stupid I must be to have 10 cents even pop up consciously, while 5 cents pops up.

When we were taught math at school we often had to do a verification step. Then I was doing contests a fair bit, and you care to check yourself there: you solve each problem and check the answer, then at the end, if you solved everything, you go over them again and double-check, triple-check. We had a few hard problems on tests instead of many easy ones. You often had to think: how do I check this?

It seems not everyone’s taught this way; some people have self-esteem-boosting cultural stuff in mind, and self-doubt can be seen as the worst thing ever, culturally. In US movies there’s always someone who’s like, I can’t do it, I can’t do it, then the hero talks them into jumping over the gap anyway, and they do it, which is just silly.

For another example, say I face something like the Monty Hall problem. I think: how can I solve it so that I can be sure of the answer? Well, the foolproof way is to consider all the possibilities, which I can do rather rapidly by visualizing it. I don’t need to think in terms of probabilities. There’s another important thing here: reductionism. One needs to know which things are derived, and that derived things aren’t ‘better’ or ‘right’. The probabilities are a substitute for evaluating a potentially infinite number of possible worlds and counting them. If you ever have a conflict between some first-principles reasoning and some advanced high-level reasoning, the advanced reasoning is not the one that’s working correctly; probably you’re misapplying it.
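The “consider all the possibilities” approach to Monty Hall can be written as a brute-force enumeration. A sketch, assuming the standard setup (the host always opens a goat door you didn’t pick, and you either always stay or always switch):

```python
from itertools import product

# Enumerate every equally likely arrangement: where the car is, and which
# door you pick first. The host's move never changes which strategy wins.
doors = [0, 1, 2]
stay_wins = switch_wins = total = 0
for car, pick in product(doors, doors):
    total += 1
    if pick == car:
        stay_wins += 1      # staying wins only when your first guess was right
    else:
        switch_wins += 1    # switching wins whenever your first guess was wrong

print(stay_wins, total)     # 3 9  -> staying wins 1/3 of the time
print(switch_wins, total)   # 6 9  -> switching wins 2/3 of the time
```

This is exactly the commenter’s method: no probability theory, just counting possible worlds and seeing which plan wins in more of them.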

• I recall many arguments over physics on some forum with a guy who just didn't understand reductionism. His barrels would float due to Archimedes' law, not due to pressure difference; then it gets confusing when you have a barrel falling down into water (a dynamic situation), and he would try to use the highest-level concepts he could think of. Or when you have a submarine stuck to the seafloor. Or when you plug your sink with a piece of styrofoam that you think would float, by Archimedes' law, except it won't, because there's no pressure being applied to its bottom. The people who don't get reductionism have the first-principles pressure-difference argument saying the styrofoam won't float, and Archimedes' law, which they misapply, saying it will; and Archimedes' law sounds advanced, so they think it's the one that's right.
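The reduction in question can be checked numerically: for a fully submerged box, Archimedes' buoyant force is nothing more than the pressure difference between the bottom and top faces. A minimal sketch (the box dimensions and depth are made up for illustration):

```python
# Verify that Archimedes' law is derived from pressure differences:
# (bottom pressure - top pressure) * area equals the weight of the
# displaced water, rho * g * V, for a submerged rectangular box.
rho, g = 1000.0, 9.8                     # water density (kg/m^3), gravity (m/s^2)
area, height, depth_top = 0.5, 0.2, 1.0  # box face area (m^2), height (m), top depth (m)

p_top = rho * g * depth_top              # hydrostatic pressure on top face
p_bottom = rho * g * (depth_top + height)
net_up_force = (p_bottom - p_top) * area # first-principles answer
buoyancy = rho * g * (area * height)     # Archimedes' law answer

assert abs(net_up_force - buoyancy) < 1e-9
```

The styrofoam-in-the-drain case is exactly the situation where the two calculations come apart: with no water under the bottom face, the pressure-difference term vanishes, and so does the "buoyancy."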

• Hav­ing worked on the Voyn­ich Manuscript (which you namecheck above) for over a decade now, I’d say that un­cer­tainty isn’t just a feel­ing: rather, it’s the de­fault (and in­deed nat­u­ral) state of knowl­edge, whereas cer­tainty is nor­mally a sign that we’ve some­how failed to grasp and ap­pre­ci­ate the limits and na­ture of our knowl­edge.

Until you can eradicate the itch that drives you to want to make knowledge final, you can never be properly curious. Real knowledge doesn't do "final" or "the last word" on a subject: it's conditional, partial, constrained, and heuristic. I contend that you should train your ape-brain to stay permanently curious: almost all certain knowledge is either fake or tautologous.

• I con­sis­tently fail sev­eral times over at this. I always feel I DO know ev­ery­thing worth know­ing, and while ob­vi­ously wrong can’t come up with any salient coun­terex­am­ples. Prob­a­bly re­lated to mem­ory prob­lems I have, I don’t seem able to come up with ex­am­ples or coun­terex­am­ples of any­thing ever.

And when I do con­sider mul­ti­ple pos­si­bil­ities, they never seem to mat­ter for what ac­tions I should take, which drains any mo­ti­va­tion to find out the an­swer if it takes more than 30 sec­onds of googling or I hap­pen to not be at my com­puter when the ques­tion oc­curs.

All the in­for­ma­tion I take in seems to be about new ideas, not ev­i­dence for or against old ones.

All this is ob­vi­ously ab­surd and I’m a bad ra­tio­nal­ist and de­serve ex­tremely low sta­tus for this heinous lack of virtue, be­ing but a bur­den to the tribe! Woe is me!

Help?

• Good. Let’s see if we can make progress.

1. New habit: Every time you’re wrong, write down what you were wrong about.

2. Play ‘the cal­ibra­tion game’: Use Wits & Wagers cards and give your con­fi­dence in­ter­vals. You’ll prob­a­bly find that 40% of the time, the cor­rect an­swer was out­side your 90% con­fi­dence in­ter­val. Write down all those failures.

3. If the differ­ent hy­pothe­ses don’t mat­ter for which ac­tions you take, you’re ei­ther bad at re­al­iz­ing the de­ci­sion-the­o­retic im­pli­ca­tions of var­i­ous hy­pothe­ses, or you’re bad at spend­ing your time think­ing about things that mat­ter. Which do you think it is?

4. Rarely is new in­for­ma­tion not ev­i­dence for or against old ideas. Maybe you need more prac­tice in model-build­ing? This is a sep­a­rate post I’d like to write at some time; I’m not sure what use­ful thing I can say about it now.

5. Re: your “heinous lack of virtue.” Re­ward your­self for effort, not for re­sults. You have more con­trol over the former.
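The bookkeeping for the calibration game in step 2 is simple enough to sketch. This tally (with made-up interval data) computes the fraction of all answers that fell outside your stated 90% intervals, which is the number to compare against the ideal 10%:

```python
# Track 90%-interval calibration: each record is (low, high, truth).
# All values below are made-up illustrations, not real trivia answers.
records = [
    (1900, 1950, 1969),   # miss: truth above the interval
    (5, 50, 32),          # hit
    (100, 1000, 880),     # hit
    (0, 10, 25),          # miss
]

misses = sum(1 for low, high, truth in records if not (low <= truth <= high))
miss_rate = misses / len(records)

# Well-calibrated 90% intervals should miss about 10% of the time.
print(f"missed {misses}/{len(records)} = {miss_rate:.0%}")
```

The key design point is that the denominator is *all* questions, hits and misses alike; tallying only the failures tells you nothing about calibration.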

• Awe­some. I’m go­ing to keep that in mind. I only have a quib­ble about

Re­ward your­self for effort, not for re­sults.

That could lead me to try but nowhere near as hard as I can, and mak­ing ex­cuses when I fail.

• To clar­ify: re­ward your­self for tak­ing new and im­proved ac­tions, or for tak­ing more of the right kind of ac­tions, even if these ac­tions don’t im­me­di­ately cause the de­sired re­sults. Once your new level be­comes a habit, stop re­ward­ing your­self and re­ward the next level up. Rinse and re­peat un­til you’re close enough to a goal that it makes sense to re­ward your­self di­rectly for the re­sults you ac­tu­ally want.

• I con­tinue to cel­e­brate a job well done even if it’s force of habit, if only to give my­self bet­ter in­cen­tives to form more good habits.

• There’s sig­nal­ing effort (es­pe­cially to your­self), and then there’s effort. You want to re­ward effort but not sig­nal­ing effort.

Often one will make a cur­sory at­tempt at some­thing, but with the goal of sig­nal­ing to them­selves or oth­ers that they put in effort or tried rather than do­ing what was most likely to ac­com­plish the goal. This leads to state­ments like “I tried to get there on time” or “I did ev­ery­thing I was sup­posed to do.” That’s ex­cuse mak­ing. Don’t re­ward that.

In­stead, re­ward your­self to the ex­tent that you did that which you had rea­son to be­lieve was most likely to work, in­clud­ing do­ing your best to figure that out, even if it didn’t suc­ceed. Do the op­po­site if you didn’t make the best de­ci­sions and put forth your best efforts, even if you do suc­ceed.

The dan­ger is that effort is much eas­ier to self-de­ceive about than re­sults—and the peo­ple who need this the most will of­ten have the most trou­ble with that. Not enough at­ten­tion is paid to this prob­lem, and it may well de­serve a top level post.

• You need both the instances where you are right and the instances where you are wrong to do correct stats… otherwise I can have 90% confidence, be wrong one time out of ten, and have the answer outside my 90% confidence interval 100% of those times I am wrong.

1. I so far have a 100% failure rate at establishing habits that involve writing things down or otherwise externalizing memory.

2. I don't have any such cards. I also doubt playing a game once for 5 minutes will help much, and akrasia and stress will prevent any more than that.

3. Of those, ab­solutely the lat­ter, but nei­ther seems plau­si­ble.

4. I have zero con­trol over both, be­cause akra­sia.

… my "not true rejection!" alarm is going off, but I can't seem to find anything to do with that information either.

• Yeah, sounds like you have a gen­eral mo­ti­va­tion prob­lem that needs fix­ing be­fore you can get bet­ter at a lot of other things.

• Not quite, but it seems unlikely this conversation will get further without getting into mental problems I really don't want to discuss with someone whose opinion I care about, like you.

• I find your hon­esty in these posts in­spiring. I wish more peo­ple had such courage.

• Ah, yea. Back­ing out of a con­ver­sa­tion and re­tract­ing all my posts as soon as it gets un­com­fortable sure is coura­geous!

• It still took a good bit of nerve to make those posts.

• Sure.

1. I so far have a 100% failure rate at establishing habits that involve writing things down or otherwise externalizing memory.

This is true for me as well. Which is why I try to rely on pro­grams that prompt me to re­ply at ran­dom in­ter­vals through com­puter pop­ups or sms, rather than habit.

I highly doubt you have zero con­trol over effort. Akra­sia limits your abil­ity to act on willpower, it doesn’t negate willpower en­tirely. Re­ward your­self for those 30 sec­ond googling bursts if noth­ing else.

I’m se­ri­ous, have a jar of mini choco­late chips by your desk and pop one in your mouth ev­ery time you google an in­ter­est­ing ques­tion on scholar or wikipe­dia.

• have a jar of mini choco­late chips by your desk and pop one in your mouth ev­ery time you google an in­ter­est­ing ques­tion on scholar or wikipe­dia.

Is there any evidence this works? 1) Does the brain treat these discretionary pleasures as reinforcement? 2) If it does, do attribution effects undermine the efficacy? Research on attribution effects shows that extrinsic rewards sometimes undermine intrinsic interest, i.e., curiosity: "Negative effects are found on high-interest tasks when the rewards are tangible, expected (offered beforehand), and loosely tied to level of performance."

• have a jar of mini choco­late chips by your desk and pop one in your mouth ev­ery time you google an in­ter­est­ing ques­tion on scholar or wikipe­dia.

Disagree. The tar­get of your ad­vice has re­ported se­ri­ous health prob­lems (and his akra­sia would prob­a­bly be a lot eas­ier to over­come if it weren’t for the health prob­lems, ac­cord­ing to my mod­els (which are based only on what he has posted to LW and on in­for­ma­tion not spe­cific to him)) so I would ad­vise him not to choose what to eat for its re­ward value.

To help him de­cide what weight to give my ad­vice, I will add that I have had se­ri­ous health prob­lems for the last 40 years.

Moreover, I have serious doubts about the usefulness of setting up blatantly artificial (i.e., self-imposed for the purpose of conditioning oneself) cause-and-effect relationships between desired changes in behavior and rewards, even when the rewards have no expected negative effect on health.

• You’re right. This was very poorly con­sid­ered ad­vice. I’m ashamed to ad­mit I kind of rec­og­nized that as I was writ­ing it, but posted it any­ways for rea­son­able-sound­ing jus­tifi­ca­tions that now sus­pi­ciously elude mem­ory.

• I kind of rec­og­nized that as I was writ­ing it, but posted it anyways

I know the feel­ing (from times I have given ad­vice).

• I’m se­ri­ous, have a jar of mini choco­late chips by your desk and pop one in your mouth ev­ery time you google an in­ter­est­ing ques­tion on scholar or wikipe­dia.

maaaan, I have to condition myself NOT to google interesting questions, else I can't get any work done for my job. But I see what you mean; that may work for conditioning oneself to work.

• (A cau­tion: I’ve found that naive im­ple­men­ta­tions of the “re­ward one­self with candy” method for over­com­ing akra­sia don’t work be­cause it be­comes too tempt­ing to just eat the candy for no rea­son. It has been sug­gested to me that it might help to ex­plic­itly write down be­fore­hand ex­actly what ac­tions jus­tify a re­ward, but I haven’t got­ten around to test­ing this yet. In­di­vi­d­ual re­sults may vary; fur­ther re­search is needed.)

• Post some hy­pothe­ses and/​or pre­dic­tions at Less Wrong. There’s a least a rea­son­able chance that peo­ple will tell you if you’re mis­taken.

• Ex­er­cise 2.2: Make plans for differ­ent wor­lds… Maybe you live in a world where you’d im­prove your cog­ni­tive func­tion by tak­ing nootrop­ics, or maybe you live in a world where the nootrop­ics would harm you.

On the bright side, this is pretty much the thought process I go through whenever I don't know the right answer to something. On the other hand ("on the dark side"?), I think my automatic instinct is "there's no scientific consensus on this that I've read about in my textbooks… therefore this is a Permanent Blank in my map and I just have to live with it." Even if I'm not up to going out and doing the original research to answer Question X, I suspect that I would often be wrong about there being no already-investigated answers. Looking a given topic up and reading about all the conflicting theories, rather than a scientific consensus, still provides more information than not reading up on it at all.

And again, thank you for the ex­cel­lent ar­ti­cle! I re­ally like this one.

• Cu­ri­os­ity is one pos­si­ble mo­ti­va­tion that forces you to ac­tu­ally look at ev­i­dence. Fear is more re­li­able and can be used when cu­ri­os­ity is hard to man­u­fac­ture.

• Cu­ri­os­ity is one pos­si­ble mo­ti­va­tion that forces you to ac­tu­ally look at ev­i­dence. Fear is more re­li­able and can be used when cu­ri­os­ity is hard to man­u­fac­ture.

Fear can be pow­er­ful but it is far from re­li­able and usu­ally not used best for on­go­ing mo­ti­va­tion of any kind.

• It de­pends on the kind of fear. The fear of go­ing off my bee­minder roads is good enough to mo­ti­vate me to stay on them. YMMV.

• It de­pends on the kind of fear. The fear of go­ing off my bee­minder is good enough to mo­ti­vate me to stay on them. YMMV.

It quite pos­si­bly would (vary). I have de­vel­oped some­thing of a “@#%@# you!” at­ti­tude to threats that are on­go­ing and try to re­serve fear as an ex­cep­tion-ori­ented mo­ti­va­tion de­vice.

• I don’t think I could re­ally feel fear about some­thing in far mode think­ing.

• I worry that fear may par­a­lyze. Cu­ri­os­ity seems more likely to spring some­one into ac­tion. Th­ese effects prob­a­bly vary be­tween per­sons.

• If fear par­a­lyzes, maybe it’s best used in bursts at times when you don’t im­me­di­ately need any­thing done and can spend some time on reeval­u­at­ing ba­sic as­sump­tions. I won­der if there should be a genre of fic­tion that’s analo­gous to hor­ror ex­cept aimed at pro­mot­ing epistemic para­noia. I’ve heard the RPG Mage: the As­cen­sion cited in that con­text. I guess there’s also movies like the Ma­trix se­ries, the Tru­man Show, In­cep­tion. One could have an epistemic coun­ter­part to Hal­loween.

• I just watched The Tru­man Show a few days ago. I in­ter­preted it as a story about a schizophrenic who keeps get­ting cra­zier, even­tu­ally ex­pe­rienc­ing a full out break and dy­ing of ex­po­sure. The scenes with the pro­duc­tion crew and au­di­ence are ac­tu­ally from the per­spec­tive of the schizophrenic’s imag­i­na­tion as he tries to ra­tio­nal­ize why so many ap­par­ently weird things keep hap­pen­ing. The scenes with Tru­man in them are Tru­man’s ret­ro­spec­tive ex­ag­ger­a­tions and dis­tor­tions of events that were in re­al­ity rel­a­tively in­nocu­ous. All this al­lows you to see how real some schizophren­ics think their delu­sions are.

• I had never heard any­body in­ter­pret­ing it that way be­fore.

• I’ve never heard that one be­fore, but there is a psy­chi­a­tric ill­ness in which peo­ple be­lieve them­selves to be watched at all times and that the world around them was cre­ated speci­fi­cally for them, et cetera. It’s called Tru­man Syn­drome.

All I know about schizophre­nia I know from the co­pi­ous num­ber of psy­chi­a­tric vol­umes and mem­o­irs I’ve read. I have an older cousin with para­noid schizophre­nia, but I don’t even re­mem­ber the last time I spoke to him.

• an epistemic coun­ter­part to Hal­loween.

I'm now imagining children wearing signs with cognitive biases written on them running around door to door, and people answering the door, uttering brief arguments, and rewarding each kid with paperback science fiction if the kid can correctly identify the fallacy.

• What I had in mind was re­plac­ing rit­u­als in­volv­ing the fear of be­ing hurt with rit­u­als in­volv­ing the fear of be­ing mis­taken. So in a more di­rect anal­ogy, kids would go around with signs say­ing “you have de­voted your whole ex­is­tence to a lie”, and threaten (emp­tily) to go into de­tails un­less they were given candy.

• Upvoted for mak­ing me laugh un­til it hurt.

You could prob­a­bly get suffi­ciently-twisted kids to do this on the usual Hal­loween. Dress them up as pro­fes­sors of philos­o­phy or some­thing; it’d be far scarier than zom­bie cos­tumes. (This would ac­tu­ally be fan­tas­tic.)

Alter­nately, dress up as a “philoso­pher” (Large fake beard and pipe, maybe?), set up some­thing like a fake re­tiring room on your front porch, tell small chil­dren that their daily lives are based on sub­tly but crit­i­cally bro­ken premises, and give them candy. (Don’t ac­tu­ally do this, un­less your neigh­bors love or hate you un­con­di­tion­ally. Or you’re mov­ing away soon.)

• You could prob­a­bly get suffi­ciently-twisted kids to do this on the usual Hal­loween. Dress them up as pro­fes­sors of philos­o­phy or some­thing; it’d be far scarier than zom­bie cos­tumes. (This would ac­tu­ally be fan­tas­tic.)

Alter­nately, dress up as a zom­bie philoso­pher and sham­ble around moan­ing “quaaaalia” in­stead of “braaaains”.

• Last Halloween I dressed as a P-zombie. I explained to anybody who would listen that I had the same physical composition as a conscious human being, but was not in fact conscious. I'm not sure that any of them were convinced that I really was in costume.

• For this to be re­ally con­vinc­ing and spoooky, you could stay in char­ac­ter:

Hal­loween party at­ten­dant: Hi rad­i­cal_nega­tive_one, what are you dressed as?
con­fed­er­ate: rad­i­cal_nega­tive_one is a p-zom­bie, who acts just like a real per­son but is not ac­tu­ally con­scious!
rad­i­cal_nega­tive_one: That’s not true, I am con­scious! I have qualia and an in­ner life and ev­ery­thing!

• rad­i­cal_nega­tive_one: (To con­fed­er­ate:) No, you’re the p-zom­bie, not me! (To Hal­loween party at­ten­dant:) They’re get­ting ev­ery­where, you know. They look and act just like you and me, phys­i­cally you can’t tell, but they have no soul! They’re just dead things!! They sound like us, but noth­ing they say means any­thing, it’s just noises com­ing out of a ma­chine!!! Your best friend could be a p-zom­bie!!!! All your friends could be p-zom­bies!!!!!

con­fed­er­ate It’s all true! And he’s one of them! Say, how do I know you’re not a zom­bie?

• con­fed­er­ate: No, rad­i­cal_nega­tive_one. You are the demons

And then rad­i­cal_nega­tive_one was a zom­bie.

• Large fake beard and pipe, maybe?

And tweed jacket with leather patches on the elbows, don’t for­get.

• Ah, yes. That would satisfy nicely.

• Oh, great. Now I have half a mind to go out this Hal­loween for the first time since ju­nior high school dressed as a philos­o­phy pro­fes­sor to scare mid­dle aged house­wives with ra­tio­nal­ist ar­gu­ments.

And I would carry out my threat of giv­ing de­tails as to how they have de­voted their whole ex­is­tences to a lie. I do that a lot, ac­tu­ally, just not in a cos­tume and gen­er­ally not by com­ing up to stranger’s houses for candy.

• kids would go around with signs say­ing “you have de­voted your whole ex­is­tence to a lie”, and threaten (emp­tily) to go into de­tails un­less they were given candy.

But that’s the fear of learn­ing that one is mis­taken, not the fear of be­ing mis­taken...

• You’re right, of course. I don’t think a fully di­rect anal­ogy is pos­si­ble here. You can’t re­ally threaten to make some­one have been wrong.

• “You always thought I wasn’t the kind of per­son who would TP your house on Hal­loween, but if you don’t give me candy I’ll make you have been wrong all along!”

• “Hah, got you—I ac­tu­ally thought all along that you were the kind of per­son who would TP my house if and only if de­nied candy on Er­rorwe’en!”

“Okay, and given your be­liefs, are you gonna give me candy?”

″...Have a Snick­ers.”

• I can eas­ily imag­ine a sci-fi hor­ror story in which some­one is pow­er­ful enough to do that. You’d have to demon­strate it first, of course, and the story would have to take some time to care­fully ex­plore what changes when some­one is made to have been wrong, but it seems plau­si­bly doable.

• Emp­tily? Just how sure of that are you?

(I like skit­tles.)

• Yes! Give me a Three Mus­ke­teers bar or I shall prove that you have de­voted your en­tire ex­is­tence to a lie us­ing only logic and rhetoric.

• What we need is a ra­tio­nal­ist hell-house.

http://en.wikipedia.org/wiki/Hell_house

• Look­ing back it seems I use cu­ri­os­ity more for hours or days-long knowl­edge-gain­ing quests, e.g. im­mers­ing my­self in a new aca­demic field, whereas I use fear more when philoso­phiz­ing on my own, es­pe­cially about AI/​FAI. In­tro­spec­tively it seems that fear is more suited to ex­am­in­ing my own thoughts or thoughts I iden­tify with whereas cu­ri­os­ity is more suited to ex­am­in­ing ideas that I don’t already iden­tify with or things in my en­vi­ron­ment. I sus­pect this is be­cause peo­ple gen­er­ally over­es­ti­mate the worth of their own ideas while un­der­es­ti­mat­ing the worth of oth­ers’—nega­tive mo­ti­va­tions re­li­ably act as crit­i­cal in­duc­tive bi­ases to coun­ter­bal­ance sys­tem­atic over­con­fi­dence in one­self, whereas pos­i­tive mo­ti­va­tions re­li­ably act as char­i­ta­ble in­duc­tive bi­ases to coun­ter­bal­ance sys­tem­atic un­der­con­fi­dence in oth­ers. As you say, it’s prob­a­ble that oth­ers would have differ­ent cog­ni­tive quirks to bal­ance and coun­ter­bal­ance.

• Once, I ex­plained the Cog­ni­tive Reflec­tion Test to Riley Crane by say­ing it was made of ques­tions that tempt your in­tu­itions to quickly give a wrong an­swer. For ex­am­ple:

This could use spoiler tags, or ideally some sub­sti­tute: it’s use­ful for peo­ple to have a chance to be ad­ministered the CRT un­awares (lest they imag­ine by hind­sight bias that they would not have been mis­led, or oth­ers lose the chance to test them).

• In feel­ing that you do not know the an­swer, Luke sug­gests to “Think of things you once be­lieved but were wrong about.” Why not take it a step fur­ther and say

1.3 When thinking about a time when you were wrong, think about how right being wrong feels, up until the moment you realize you are wrong.

In reflecting on times when I have been wrong, what I find most disturbing is not what I was wrong about, but the degree to which being wrong is cognitively similar to being right. In college, I went to an Elizabeth Loftus lecture where she shockingly announced that the degree of confidence you have in a memory has no bearing on whether it is accurate. The more I think on this idea, the more I find it to be true. Being wrong feels like being right. If that is the case, how can I ever be certain of any ideas? Luke suggests tools like a cognitive reflection test to work towards uncovering when you are wrong. However, is this really a method for uncovering cognitive blind spots, or is it the rigorous application of an existing paradigm of problem solving? I would argue it is the latter. It is convenient that the example given is a math problem, but what happens when you need to cognitively reflect over false intuition in another realm (you mentioned sociology)? Thinking "No! Algebra" might help some problems, but not all. How do you justify your belief in the application of algebra to a situation? How do you discover new paradigms of problem solving? Eliezer states:

When you’re re­ally cu­ri­ous, you’ll grav­i­tate to in­quiries that seem most promis­ing of pro­duc­ing shifts in be­lief, or in­quiries that are least like the ones you’ve tried be­fore.

I agree with this statement. Is it illogical to think the inquiries that are least like the ones I have tried before are the ones I have such low confidence in that I have actually dismissed them? Or in other words, the ideas I actively disbelieve. I argue that a truly curious person would actively work to see the truth in things he or she knows to be wrong: an epistemological take on "Keep your friends close but your enemies closer." If you think theism is absurd, perhaps you should be more curious about it. I am not advocating complete relativism or anything close to that. I do think there is right and wrong. But I think looking for the right in what you think is wrong will better mark the path of moderation.
What is right is mod­er­a­tion.

• I ap­prove strongly! Publi­cly-posted ex­er­cises may yield prac­tice, prac­tice yields habit, and habit yields changed be­hav­ior. Devel­op­ing deeper, more-fo­cused cu­ri­os­ity would be a grand step to­wards be­com­ing more awe­some. But!

( sum­mary: It is im­por­tant to prac­tice this skill at ap­pro­pri­ate times, like when it is use­ful and fea­si­ble to work on an­swer­ing the given ques­tion, and not just at ran­dom, or when­ever it’s con­ve­nient to sched­ule the prac­tice. I plan to at­tach a re­minder to my re­search to-do list.)

Alright, says I, this ex­er­cise seems plau­si­ble enough. So I’ll start prac­tic­ing this ex­er­cise and see how well it works. But how ought I do this for reg­u­lar prac­tice?

At first, I thought about walk­ing through this list as part of my morn­ing rou­tine. But how would I ac­tu­ally do that? The ex­er­cise needs an unan­swered ques­tion, and I don’t gen­er­ally have a fresh, new, im­por­tant ques­tion ev­ery morn­ing. So:

• If I pick an ar­bi­trary ques­tion I don’t know the an­swer to, I should be able to im­port the feel­ing of un­cer­tainty, but clear eval­u­a­tion of the con­se­quences of be­ing wrong will be de­mo­ti­vat­ing, and the Li­tany of Gendlin will be silly.

• If I pick an im­por­tant ques­tion I don’t know the an­swer to, then I sus­pect that I can use this ex­er­cise to get my­self quite mo­ti­vated to bet­ter an­swer it. In the con­text of my morn­ing rou­tine, though, this would be ter­rible. My morn­ing rou­tine is op­ti­mized for satis­fy­ing ba­sic needs, wak­ing up quickly, and get­ting to my office at a rea­son­able hour. This form of the ex­er­cise, if effec­tive, would ac­tu­ally con­flict sharply with my long-term goals.

In fact, I sus­pect that in­tense cu­ri­os­ity about any ran­dom topic, at any ran­dom time, is ac­tu­ally a bad idea. For in­stance, if I’m on a sev­eral-hours drive, by my­self, and by ran­dom mus­ing I be­come in­tensely cu­ri­ous about how to re­solve (say) an­thropic prob­a­bil­ities. I don’t know much about the real ar­gu­ments in an­throp­ics, so I’m just go­ing to be frus­trated at my situ­a­tion. Even­tu­ally, I’ll think about some­thing else, and the cu­ri­os­ity will pass be­fore I can use it. (And now I’ve as­so­ci­ated that par­tic­u­lar cu­ri­os­ity with frus­tra­tion, and my in­abil­ity to satisfy it, and per­haps next time I won’t be­come cu­ri­ous so read­ily.)

What we want, then, is the abil­ity to get cu­ri­ous about a ques­tion be­cause we rec­og­nize that we’d like to an­swer it. When we have a ver­bal jus­tifi­ca­tion to re­solve some ques­tion cor­rectly, we want to in­voke the ap­pro­pri­ate emo­tions as mo­ti­va­tion to do so. We want to prac­tice this in­vo­ca­tion. I don’t see how to do this as part of my morn­ing rou­tine, to ad­mit con­ve­nient, reg­u­lar prac­tice.

So, I now plan to attach the reminder near where I manage the relevant to-do list. Any time I start a block of time aimed (even indirectly) at answering some question, I'll run through this exercise for that question. I hope to develop this habit so that when I'm reading for pleasure, or satisfying my own interest, or even doing research for writing blog posts, or just discussing some question, I'll first run this exercise—or, eventually, just be curious.

Does any­one else plan to ac­tu­ally carry out this ex­er­cise? How will you hold your­self to reg­u­lar­ity?

• Clos­ing my eyes gives me only the feel­ing of hav­ing defen­sively headed a long ball in soc­cer a few hours ago. Some­times I try to think and noth­ing seems to hap­pen :)

VoI shouldn’t be ab­bre­vi­ated (even with hy­per­link).

Think­ing about how I’ve been mis­taken in the past feels pretty bad for me—akin to true em­bar­rass­ment. But I sup­pose it’s al­most the only rea­son I’m ever cau­tiously un­cer­tain, and that seems sad.

I re­ally value your sug­ges­tion to pur­pose­fully cul­ti­vate delight-based ex­plo­ra­tion, in­stead of merely look­ing to min­i­mize re­gret (even fairly as­signed re­gret at com­ing up short of bound­edly-op­ti­mal-ra­tio­nal, with­out con­fus­ing out­come for ex­pected out­come in hind­sight).

• I re­ally value your sug­ges­tion to pur­pose­fully cul­ti­vate delight-based ex­plo­ra­tion, in­stead of merely look­ing to min­i­mize regret

Maybe I should have em­pha­sized this more.

• Set­ting step one as “Feel that you don’t already know the an­swer” fits with Loewen­stein (1994)’s “gap the­ory of cu­ri­os­ity”, sum­ma­rized by Cooney (2010):

[Loewen­stein’s] the­ory is that cu­ri­os­ity hap­pens when peo­ple feel a gap in their knowl­edge about some­thing… Lay­ing out a ques­tion and invit­ing oth­ers to pon­der it will help keep the in­di­vi­d­ual’s at­ten­tion, be­cause it gets them men­tally in­volved and be­cause there’s an el­e­ment of un­ex­pect­ed­ness. This is why cliffhang­ers are of­ten used at the end of tele­vi­sion soap op­eras, to get view­ers to tune in to the next epi­sode, or at the end of chap­ters in a thriller to keep read­ers glued to the page.

Taking the gap theory a step further, Harvard physics professor Eric Mazur has developed a teaching tool he calls concept testing. Mazur has found that posing a question to stimulate curiosity and then asking students to vote publicly on the answer makes them more engaged and curious about the outcome. Mazur has also found that fostering disagreement among students is particularly effective at stimulating interest. Not only has their curiosity been stimulated, but learning the answer now has personal relevance — it will show whether or not they're smarter than their classmates.

• Another idea from Anna Sala­mon is just to brain­storm a ton of ques­tions on the topic you want to get cu­ri­ous about for a pre­de­ter­mined pe­riod of N min­utes. Very limited data sug­gests this method works sig­nifi­cantly bet­ter for me.

• Am I the only one who searched the phrase “I see you start to an­swer a ques­tion, and then you stop, and I see you get cu­ri­ous.” to see who it referred to?

• So, should I start con­sum­ing but­ter half-sticks?

• The study had just 27 par­ti­ci­pants, and wasn’t dou­ble blind. While it was an in­ter­est­ing ex­per­i­ment, I cer­tainly wouldn’t act on it, ex­cept per­haps to read an­other, similar ex­per­i­ment.

• It doesn’t seem like the cost of a self ex­per­i­ment here would be very high, and you are the only re­search sub­ject that re­ally mat­ters to your­self...

• At least eat them with some­thing, ew. Melt it in a pan and fry some­thing in it.


• Curious about what though? It seems like a very important piece of the above lesson is missing if we have no guidance as to what we should be curious about. It does me no good, perhaps no small amount of harm, to be intensely curious about the details of a fictional world. I ought not be curious about the personal life of my neighbor. And while curiosity about insects may serve some, it's unlikely to do most people any good at all. I think we have no good reason to believe that we're generally curious about the right sorts of things.

And there seems to be a deeper problem here too. Some things about which we're curious might just not be very knowable. I can study ancient history all I like, but there's just a limit to what we can know about what caused the Peloponnesian War, not just because of the temporal distance or lack of records, but because there's just a lot of fundamental incoherence to things like that. History, to take one example, just isn't that knowable. Curiosity about history can be rewarded, but only a very restrained curiosity.

I think this is where the idea that 'a burning itch to know is better than a vow to pursue the truth' breaks down: I've felt that burning itch to know, and I know from experience that it doesn't of itself distinguish between worthy topics of curiosity and unworthy ones. A vow, at least, already has the idea of seriousness and purposefulness built into it.

• ...it will make you light and ea­ger, and give pur­pose to your ques­tion­ing and di­rec­tion to your skills.

And this ar­ti­cle rekin­dled that for me. I have a mo­ti­va­tion to ex­plore I have not felt in quite some time. Thanks for writ­ing this, Luke!

• If you have be­liefs about the mat­ter already, push the “re­set” but­ton and erase that part of your map. You must feel that you don’t already know the an­swer.

It seems like a bad idea to intentionally blank part of your map. If you already know things, you shouldn't forget what you already know. On the other hand, if you have reason to doubt what you think you know, you should blank the suspect parts of your map because you have reason to doubt them, and not artificially as part of a procedure for generating curiosity.

I think what you may be try­ing to say is that it is good prac­tice to pe­ri­od­i­cally re­think what you think you know, and make sure that A) you re­mem­ber how you came to be­lieve what you be­lieve, and B) your con­clu­sions still make sense in light of cur­rent ev­i­dence. How­ever, when you do this, it is im­por­tant not to get into the habit of quickly re­turn­ing to the same con­clu­sions for the same rea­sons. If you never change your con­clu­sions while re­think­ing them, that’s prob­a­bly a sign that you are too re­sis­tant to chang­ing your mind.

• This is all good stuff, but it makes cu­ri­os­ity sound com­pli­cated. I thought that the point of us­ing cu­ri­os­ity as a hook into epistemic ra­tio­nal­ity is that once you feel the emo­tion of cu­ri­os­ity, your brain of­ten just knows what to do next.

Also cu­ri­os­ity feels good.

• Cu­ri­os­ity in it­self isn’t nec­es­sar­ily com­pli­cated, and yes it feels good, but a lot of times, for a lot of peo­ple, it doesn’t hap­pen by it­self. And it sounds like the pro­cess of pro­duc­ing cu­ri­os­ity in one­self is more com­pli­cated than sim­ply feel­ing it nat­u­rally.

• Bug re­port: step 2, ex­er­cise 2.1. If the con­se­quences of my cur­rent best guess be­ing wrong are much less dire than the con­se­quences of be­ing wrong on re­com­put­ing, my so­cial cir­cle thinks that the plan based on this cur­rent best guess is very im­por­tant, and I hate the peo­ple who dis­agree, then I’m ter­rified of try­ing to re­com­pute.

• People try very hard to ignore the consequences of being wrong. Fear in this case is dangerous, because it causes stagnation and breaks curiosity.

• My father was in the Korean war, on the pen­in­sula.

He did not have ac­cess to but­ter or milk for some­thing like 9 months.

When he got R & R to Tokyo he ate a pound of but­ter with a knife and fork.

I should note that while I don't know how fast he could do math in his head, he could count/remember cards like nobody's business. Also, he died of a massive coronary at 64, weighing close to 290 pounds.

• Are you im­ply­ing that there is a causal link be­tween his con­sump­tion of but­ter and his weight gain?

• Bah. It looks like an earlier, much more detailed and funnier reply got eaten by something.

But to answer: no, I don't think his butter eating specifically and narrowly led to his rather large size, but rather his eating of almost everything that would taste good, and in quantities that were sometimes moderately impressive.

Given how much he ate and smoked, and how little he moved, it's a wonder he wasn't twice as big and that he lived as long as he did.