# Checklist of Rationality Habits

As you may know, the Center for Applied Rationality has run several workshops, each teaching content similar to that in the core sequences, but made more practical, and more into fine-grained habits.
Below is the checklist of rationality habits we have been using in the minicamps’ opening session. It was co-written by Eliezer, myself, and a number of others at CFAR. As mentioned below, the goal is not to assess how “rational” you are, but, rather, to develop a personal shopping list of habits to consider developing. We generated it by asking ourselves, not what rationality content it’s useful to understand, but what rationality-related actions (or thinking habits) it’s useful to actually do.
I hope you find it useful; I certainly have. Comments and suggestions are most welcome; it remains a work in progress. (It’s also available as a pdf.)
---
This checklist is meant for your personal use so you can have a wish-list of rationality habits, and so that you can see if you’re acquiring good habits over the next year—it’s not meant to be a way to get a ‘how rational are you?’ score, but, rather, a way to notice specific habits you might want to develop. For each item, you might ask yourself: did you last use this habit...
• Never

• Today/yesterday

• Last week

• Last month

• Last year

• Before the last year

1. Reacting to evidence / surprises / arguments you haven’t heard before; flagging beliefs for examination.

1. When I see something odd—something that doesn’t fit with what I’d ordinarily expect, given my other beliefs—I successfully notice, promote it to conscious attention and think “I notice that I am confused” or some equivalent thereof. (Example: You think that your flight is scheduled to depart on Thursday. On Tuesday, you get an email from Travelocity advising you to prepare for your flight “tomorrow”, which seems wrong. Do you successfully raise this anomaly to the level of conscious attention? (Based on the experience of an actual LWer who failed to notice confusion at this point and missed their plane flight.))

2. When somebody says something that isn’t quite clear enough for me to visualize, I notice this and ask for examples. (Recent example from Eliezer: A mathematics student said they were studying “stacks”. I asked for an example of a stack. They said that the integers could form a stack. I asked for an example of something that was not a stack.) (Recent example from Anna: Cat said that her boyfriend was very competitive. I asked her for an example of “very competitive.” She said that when he’s driving and the person next to him revs their engine, he must be the one to leave the intersection first—and when he’s the passenger he gets mad at the driver when they don’t react similarly.)

3. I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode. (Recent example from Anna: Noticed myself explaining to myself why outsourcing my clothes shopping does make sense, rather than evaluating whether to do it.)

4. I notice my mind flinching away from a thought; and when I notice, I flag that area as requiring more deliberate exploration. (Recent example from Anna: I have a failure mode where, when I feel socially uncomfortable, I try to make others feel mistaken so that I will feel less vulnerable. Pulling this thought into words required repeated conscious effort, as my mind kept wanting to just drop the subject.)

5. I consciously attempt to welcome bad news, or at least not push it away. (Recent example from Eliezer: At a brainstorming session for future Singularity Summits, one issue raised was that we hadn’t really been asking for money at previous ones. My brain was offering resistance, so I applied the “bad news is good news” pattern to rephrase this as, “This point doesn’t change the fixed amount of money we raised in past years, so it is good news because it implies that we can fix the strategy and do better next year.”)

2. Questioning and analyzing beliefs (after they come to your attention).

1. I notice when I’m not being curious. (Recent example from Anna: Whenever someone criticizes me, I usually find myself thinking defensively at first, and have to visualize the world in which the criticism is true, and the world in which it’s false, to convince myself that I actually want to know. For example, someone criticized us for providing inadequate prior info on what statistics we’d gather for the Rationality Minicamp; and I had to visualize the consequences of [explaining to myself, internally, why I couldn’t have done any better given everything else I had to do], vs. the possible consequences of [visualizing how it might’ve been done better, so as to update my action-patterns for next time], to snap my brain out of defensive-mode and into should-we-do-that-differently mode.)

2. I look for the actual, historical causes of my beliefs, emotions, and habits; and when doing so, I can suppress my mind’s search for justifications, or set aside justifications that weren’t the actual, historical causes of my thoughts. (Recent example from Anna: When it turned out that we couldn’t rent the Minicamp location I thought I was going to get, I found lots and lots of reasons to blame the person who was supposed to get it; but realized that most of my emotion came from the fear of being blamed myself for a cost overrun.)

3. I try to think of a concrete example that I can use to follow abstract arguments or proof steps. (Classic example: Richard Feynman being disturbed that Brazilian physics students didn’t know that a “material with an index” meant a material such as water. If someone talks about a proof over all integers, do you try it with the number 17? If your thoughts are circling around your roommate being messy, do you try checking your reasoning against the specifics of a particular occasion when they were messy?)

4. When I’m trying to distinguish between two (or more) hypotheses using a piece of evidence, I visualize the world where hypothesis #1 holds, and try to consider the prior probability I’d have assigned to the evidence in that world; then I visualize the world where hypothesis #2 holds, and see if the evidence seems more likely or more specifically predicted in one world than the other. (Historical example: During the Amanda Knox murder case, after many hours of police interrogation, Amanda Knox turned some cartwheels in her cell. The prosecutor argued that she was celebrating the murder. Would you, confronted with this argument, try to come up with a way to make the same evidence fit her innocence? Or would you first try visualizing an innocent detainee, then a guilty detainee, to ask with what frequency you think such people turn cartwheels during detention, to see if the likelihoods were skewed in one direction or the other?)
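The comparison this habit describes is just Bayes’ rule in odds form: ask how strongly the evidence is predicted under each hypothesis, and multiply your prior odds by that ratio. The sketch below is purely illustrative; the function name and every probability in it are invented for the example, not taken from the actual case.

```python
# A minimal sketch of the habit above: compare how strongly a piece of
# evidence is predicted under each hypothesis, instead of explaining it
# away under just one. All numbers here are illustrative assumptions.

def posterior_odds(prior_odds, p_evidence_given_h1, p_evidence_given_h2):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    likelihood_ratio = p_evidence_given_h1 / p_evidence_given_h2
    return prior_odds * likelihood_ratio

# Hypothetical numbers: suppose you guess a guilty detainee does something
# as odd as turning cartwheels 2% of the time, and an innocent one 1% of
# the time. The evidence then only doubles the odds of guilt, which is far
# weaker than the prosecutor's framing suggests.
odds = posterior_odds(prior_odds=1.0,
                      p_evidence_given_h1=0.02,   # P(cartwheels | guilty)
                      p_evidence_given_h2=0.01)   # P(cartwheels | innocent)
print(odds)  # 2.0
```

The point of writing it out is that the answer depends only on the *ratio* of the two likelihoods, so you are forced to visualize both worlds rather than just the one a clever arguer hands you.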

5. I try to consciously assess prior probabilities and compare them to the apparent strength of evidence. (Recent example from Eliezer: Used it in a conversation about apparent evidence for parapsychology, saying that for this I wanted p < 0.0001, like they use in physics, rather than p < 0.05, before I started paying attention at all.)

6. When I encounter evidence that’s insufficient to make me “change my mind” (substantially change beliefs/policies), but is still more likely to occur in world X than world Y, I try to update my probabilities at least a little. (Recent example from Anna: Realized I should somewhat update my beliefs about being a good driver after someone else knocked off my side mirror, even though it was legally and probably actually their fault—even so, the accident is still more likely to occur in worlds where my bad-driver parameter is higher.)
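“Updating a little” is what Bayes’ theorem itself prescribes when the likelihood ratio is modest. Here is a minimal sketch; the specific numbers (the 90% prior, the per-year accident rates) are assumptions made up for the illustration, not claims about actual drivers.

```python
# Illustrative sketch of "updating a little": evidence that's more likely
# in world X than world Y should shift your probability somewhat, just not
# all the way. The driving numbers below are assumptions for the example.

def bayes_update(prior, p_e_given_x, p_e_given_y):
    """P(X | evidence) via Bayes' theorem over two exhaustive worlds X and Y."""
    numerator = prior * p_e_given_x
    return numerator / (numerator + (1 - prior) * p_e_given_y)

# Suppose you start 90% confident you're a good driver, and a collision
# (even one that's legally the other party's fault) is twice as likely
# per year for bad drivers as for good ones.
posterior = bayes_update(prior=0.9,
                         p_e_given_x=0.05,   # P(accident | good driver)
                         p_e_given_y=0.10)   # P(accident | bad driver)
print(round(posterior, 3))  # 0.818
```

A drop from 0.90 to about 0.82 is exactly the small-but-nonzero update the habit calls for: the evidence is weak, so the belief moves, but only a little.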

3. Handling inner conflicts; when different parts of you are pulling in different directions, or you want different things that seem incompatible; responses to stress.

1. I notice when I and my brain seem to believe different things (a belief-vs-anticipation divergence), and when this happens I pause and ask which of us is right. (Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, especially about social things, than I am, and is almost always wrong.)

2. When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna’s brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he were living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))

3. When facing a difficult decision, I check which considerations are consequentialist—which considerations are actually about future consequences. (Recent example from Eliezer: I bought a $1400 mattress in my quest for sleep, over the Internet, hence much cheaper than the mattress I tried in the store, but non-returnable. When the new mattress didn’t seem to work too well once I actually tried sleeping nights on it, this was making me reluctant to spend even more money trying another mattress. I reminded myself that the $1400 was a sunk cost rather than a future consequence, and didn’t change the importance and scope of the future better sleep at stake (occurring once per day and a large effect size each day).)

4. What you do when you find your thoughts, or an argument, going in circles or not getting anywhere.

1. I try to find a concrete prediction that the different beliefs, or different people, definitely disagree about, just to make sure the disagreement is real/empirical. (Recent example from Michael Smith: Someone was worried that rationality training might be “fake”, and I asked if they could think of a particular prediction they’d make about the results of running the rationality units, that was different from mine, given that it was “fake”.)

2. I try to come up with an experimental test, whose possible results would either satisfy me (if it’s an internal argument) or that my friends can agree on (if it’s a group discussion). (This is how we settled the running argument over what to call the Center for Applied Rationality—Julia went out and tested alternate names on around 120 people.)

3. If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you’re “smart enough”, whether your partner is “inconsiderate”, or if you’re “trying to do the right thing”.) (Recent example from Anna: Advised someone to stop spending so much time wondering if they or other people were justified; was told that they were trying to do the right thing; and asked them to taboo the word ‘trying’ and talk about how their thought-patterns were actually behaving.)

5. Noticing and flagging behaviors (habits, strategies) for review and revision.

1. I consciously think about information-value when deciding whether to try something new, or investigate something that I’m doubtful about. (Recent example from Eliezer: Ordering a $20 exercise ball to see if sitting on it would improve my alertness and/or back muscle strain.) (Non-recent example from Eliezer: After several months of procrastination, and due to Anna nagging me about the value of information, finally trying out what happens when I write with a paired partner; and finding that my writing productivity went up by a factor of four, literally, measured in words per day.)

2. I quantify consequences—how often, how long, how intense. (Recent example from Anna: When we had Julia take on the task of figuring out the Center’s name, I worried that a certain person would be offended by not being in control of the loop, and had to consciously evaluate how improbable this was, how little he’d probably be offended, and how short the offense would probably last, to get my brain to stop worrying.) (Plus 3 real cases we’ve observed in the last year: Someone switching careers is afraid of what a parent will think, and has to consciously evaluate how much emotional pain the parent will experience, for how long before they acclimate, to realize that this shouldn’t be a dominant consideration.)

6. Revising strategies, forming new habits, implementing new behavior patterns.

1. I notice when something is negatively reinforcing a behavior I want to repeat. (Recent example from Anna: I noticed that every time I hit ‘Send’ on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I’ve (a) stopped doing that, and (b) installed a habit of smiling each time I hit ‘Send’ (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.)

2. I talk to my friends or deliberately use other social commitment mechanisms on myself. (Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had some juice left over when work was done. I looked at Michael Smith and jokingly said, “But if I don’t drink this now, it will have been wasted!” to prevent the sunk cost fallacy.) (Example from Eliezer: When I was having trouble getting to sleep, I (a) talked to Anna about the dumb reasoning my brain was using for staying up later, and (b) set up a system with Luke where I put a ‘+’ in my daily work log every night I showered by my target time for getting to sleep on schedule, and a ‘−’ every time I didn’t.)

3. To establish a new habit, I reward my inner pigeon for executing the habit. (Example from Eliezer: Multiple observers reported a long-term increase in my warmth / niceness several months after… 3 repeats of 4-hour writing sessions during which, in passing, I was rewarded with an M&M (and smiles) each time I complimented someone, i.e., remembered to say out loud a nice thing I thought.) (Recent example from Anna: Yesterday I rewarded myself using a smile and happy gesture for noticing that I was doing a string of low-priority tasks without doing the metacognition for putting the top priorities on top. Noticing a mistake is a good habit, which I’ve been training myself to reward, instead of just feeling bad.)

4. I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)

5. I use the outside view on myself. (Recent example from Anna: I like to call my parents once per week, but hadn’t done it in a couple of weeks. My brain said, “I shouldn’t call now because I’m busy today.” My other brain replied, “Outside view: is this really an unusually busy day, and will we actually be less busy tomorrow?”)

• This may be the single most useful thing I’ve ever read on LessWrong. Thank you very, very much for posting it.

Here’s one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

Often, when I am procrastinating, I find that the source of my procrastination is a feeling of being overwhelmed. In particular, I don’t know where to begin on a task, or I do but the task feels like a huge obstacle towering over me. So when I think about the task, I feel a crushing sense of being overwhelmed; the way I escape this feeling is by procrastination (i.e. avoiding the source of the feeling altogether).

When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)

I picked this strategy up after realizing that the way I approached large programming projects (write the main function, then write each of the subroutines that it calls, etc.) could be applied to life in general. Now I’m about to apply it to the task of writing an NSF fellowship application. =)
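The top-down decomposition this commenter describes can be sketched as a tiny recursive walk over a task tree: if a task still feels overwhelming, split it again, until every leaf is a small concrete step. The task names below and the “no children means small enough” rule are invented for illustration.

```python
# A toy sketch of the recursive breakdown strategy: walk a task tree
# depth-first and return its leaves as a flat to-do list. A task with no
# registered subtasks is treated as "small enough to just do".

def flatten_tasks(task, subtasks_of):
    """Depth-first walk: return the leaf steps of a task tree, in order."""
    children = subtasks_of.get(task, [])
    if not children:            # no subtasks: a concrete, doable step
        return [task]
    steps = []
    for child in children:
        steps.extend(flatten_tasks(child, subtasks_of))  # recurse, as in the comment
    return steps

# Hypothetical fellowship-application breakdown:
subtasks_of = {
    "write NSF application": ["draft research statement", "request references"],
    "draft research statement": ["outline aims", "write first paragraph"],
}
print(flatten_tasks("write NSF application", subtasks_of))
# ['outline aims', 'write first paragraph', 'request references']
```

The recursion mirrors the “apply this solution recursively” step: an intimidating subtask simply gets its own entry in the dictionary, and the flattened list stays a sequence of small steps.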

• Here’s one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

It’s a classic self-help technique (especially in ‘Getting Things Done’) for a reason: it works.

• For the slightly more advanced procrastinator who also finds a large sequence of tasks daunting, it might help to instead search for the first few tasks and then ignore the rest for now. Of course, sometimes in order to find the first tasks you may need to break down the whole task, but other times you don’t.

• Hello! I am procrastinating on writing the NSF fellowship! High five!

My current subproblem consists of filling in all the instances of “INSPIRATIONAL STUFF” with actual inspirational stuff, so this particular subproblem is looking pretty difficult. :(

• Well, your task spec is broken, so no wonder your brain won’t be whipped into doing it.

“Inspirational stuff” is a trigger for thinking in terms of things like advertising or religious revivals, which are emotional grabs intended to disengage (or even flimflam) the reasoning faculties. Any rationalist would flinch away.

Re-frame: visualize your audience. You are looking to simply and clearly convey whatever part of their far-mode utility function is advanced by the thing you are pushing.

• Here’s one I use all the time: When a problem seems overwhelming, break it up into manageable subproblems.

This article would probably benefit from being re-read in smaller chunks over the course of several days. There are a lot of things in it that need to be thought about seriously in order to be effective, and I agree with you about its usefulness.

• When I notice myself doing this, I try to break the problem down into a sequence of high-level subtasks, usually in the form of a to-do list. Emotionally/metaphorically, instead of having to cross the obstacle in one giant leap, I can climb a ladder over it, one step at a time. (If the subtasks continue to be intimidating, I just apply this solution recursively, making lists of subsubtasks.)

I think the most important aspect of this, for me anyway, is being able to dump most of what you’re working on out of your working memory, trusting yourself that it’s organized on paper, so that you can free up more brain space to do each of the sub-parts.

• Very nice list! I feel like this one in particular is one of the most important ones:

I try not to treat myself as if I have magic free will; I try to set up influences (habits, situations, etc.) on the way I behave, not just rely on my will to make it so. (Example from Alicorn: I avoid learning politicians’ positions on gun control, because I have strong emotional reactions to the subject which I don’t endorse.) (Recent example from Anna: I bribed Carl to get me to write in my journal every night.)

To give my own example: I try to be vegetarian, but occasionally the temptation of meat gets the better of me. At some point I realized that whenever I walked past a certain hamburger place—which was something that I typically did on each working day—there was a high risk of me succumbing. Obvious solution: modify my daily routine to take a slightly longer route which avoided any hamburger places. Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

• Modifying your environment so that you can completely avoid the need to use willpower is ridiculously useful.

My personal example: arranging to go exercise on the way to or from somewhere else will drastically increase the probability that I’ll actually go. There’s a pool a 5-minute bike ride from my house, which is also on the way home from most of the places I would be biking from. Even though the extra 10 minutes round trip is pretty negligible (and counts as exercise itself), I’m probably 2x as likely to go if I have my swim stuff with me and stop off on the way home. The effect is even more drastic for my taekwondo class: it’s a 45-minute bike ride from home and about a 15-minute bike ride from the campus where I have most of my classes. Even if I finish class at 3:30 pm and taekwondo is at 7 pm, it still makes more sense for me to stay on campus for the interim; if I do, there’s nearly 100% likelihood that I’ll make it to taekwondo, but if I go home and get comfy, that drops to less than 50%.

• For me this was the biggest insight that dramatically improved my ability to form habits. I don’t actually decide things most of the time. Agency is something that only occurs intermittently. Therefore I use my agency on changing what sorts of things I am surrounded by, rather than on the tasks themselves. This works because the default state is to simply be the average of what I am surrounded by.

Cliché example: not having junk food in the house improves my diet by making it take additional work to go out and get it.

• Another example: as I don’t feel like getting in a relationship for the foreseeable future, I try to avoid circumstances with lots of pretty girls around, e.g. not going to certain parties, taking walks in those parts of the forest where I don’t expect to meet any, and in general, trying to convince other parts of my brain that the only girl I could possibly be with exists somewhere in the distant future or not at all (if she can’t do a spell or two and talk to dragons, she won’t do ;-)).

It also helps being focused on math, programming and abstract philosophy—and spending time on LW, it seems. :)

• I disagree with the commenters below—I think you’re fairly likely to find yourself wanting to be in a relationship if you’re not careful. I’m a female, and I don’t want to get married or have kids. Unfortunately, I’m 24, and some part of me/the body is really trying to marry me off and give me baybehs. So I try not to take in too much media that normalizes this vs. normalizing my goals, I don’t babysit, and I am open about my intent so as not to attract invitations.

• I don’t think you’d be likely to find yourself in a relationship despite not wanting one by going to parties with lots of pretty girls around, let alone by walking on a street where girls also walk rather than through a forest. And not developing social skills may make things much harder should you ever decide to try and get into a relationship later in your life.

• Aha, but the clever arguer could respond that you could be likely to find yourself wanting a relationship despite not wanting to want one, and thus that avoidance is a twice-effective method of willpower conservation!

Of course, it’s unlikely that the above is true and applicable to this case. If you’re to end up wanting it, and you’ll end up wanting it enough to compensate for the opportunity costs, regarding other things you might want, incurred by eventual willpower expenses or time spent “succumbing” and attempting to get into a relationship, then I think it trivially follows that you should already have updated towards the more reflectively coherent behavior that seems to give higher expected utility. After all, we want to win.

• It’s the “Lead me not into temptation, but deliver me from weevils!” tactic. Well . . . maybe not weevils, but not evil either, in this case.

Your objection to the ultimate utility of avoidance doesn’t seem to take into account the desire to avoid distraction and wasted time even when successfully resisting the biological urges toward relationship-establishing behavior. Even if you (for some nonspecific definition of “you”) simply find yourself waylaid for a few minutes by a pretty girl, but are ultimately ready to move on, the time spent not only in those few moments but also in thinking about it later on may prove a distraction from other things, regardless of whether you allow yourself to get caught up enough to actively pursue a relationship with her.

• Well, yeah, my objection does take it into account, but I was being unfair in my implicit assumptions because I didn’t think it likely that anyone here would object.

If you’re to end up wanting it, and you’ll end up wanting it enough to compensate for the opportunity costs regarding other things (...)

Basically, this is where I lumped in an implicit: “For most humans, the desire and expected benefits of successfully entering a relationship are much greater in terms of evolved values than the opportunity costs incurred, and it is reasonable to expect that the gains obtained from this would free up enough mental resources to actually make faster, rather than slower, progress on other goals of interest in the case of well-motivated individuals with above-average instrumental rationality.”

However, estimating the costs you mentioned for humans-on-average is difficult for me, due to lack of data. Picture me as wearing a “typical mind fallacy warning!” badge on this particular issue.

• Well, it has happened to me before—girls really can be pretty insistent. :) But this is not actually what concerns me—it’s the distraction/wasted time induced by a pretty-girl-contact event, as apotheon explained below.

• Set Future You up for success, rather than failure.

Edit: Thought of a personal example. I know that if I scratch my head, my head will become more itchy. It is a vicious cycle. If I cut my nails short, it seems to help. In the moment, I might not want to cut my nails because there is no immediate value. But it is, in a sense, “modifying my environment” so that in the future I’ll be less likely to fall into the itchy-head trap.

• Awesome list. I’m interested in the way there are 24 questions that are grouped into 6 overarching categories. Do they empirically cluster like this in actual humans? It would be fascinating to get a few hundred responses to each question and do dimensional analysis to see if there is a small number of common core issues that can be communicated and/or adjusted more efficiently :-)

• I’d like to add “noticing when you don’t know something.” When someone asks you a question, it’s surprisingly tempting to try to be helpful and offer them an answer even when you don’t have the necessary knowledge to provide an accurate one. It can be easy to infer what the truth might be and offer that as an answer, without explaining that you’re just guessing and don’t actually know. (Example: I recently purchased a new television and my co-worker asked me what sort of Parental Controls it offered. I immediately started providing him an answer I had inferred from limited knowledge, and it took me a moment to realize I didn’t actually know what I was talking about and instead tell him, “I don’t know.”)

This is essentially the problem of confabulation mentioned here; in this case it’s a confabulation of knowledge about the world, as opposed to confabulating knowledge about the self. In terms of the map/territory analogy, this would be a situation where someone asks you a question about a specific area of your map, and you choose to answer as if that section of your map is perfectly clear to you, even when you know that it’s blurry. Don’t treat a blurry map as if it were clear!

• Good one. I try to be very conservative with my language & preface everything I say with something that implies an amount of uncertainty.

There might be cultural differences. In China people will give you directions on the street even if they have no idea. I have yet to have someone reply to a request for help with “I don’t know”.

It seems like an ego-protection thing to me & it isn’t helpful.

• I like your comment, but one problem is that telling people you don’t know stuff projects low status. I think most people, including me, really know very little, but if you’re honest about this all the time then this can contribute to persistent low status. (I tried the “don’t care about status” thing for a while, but being near the bottom of the social totem pole just doesn’t seem to work for me psychologically. So lately I’ve decided to optimize for status everywhere at least somewhat.)

• I like your comment, but one problem is that telling people you don’t know stuff projects low status.

That only happens if it’s credible; otherwise it’s taken as counter-signalling. When I say I don’t know much about something, people generally realize I’m just holding myself to a high standard and don’t genuinely believe I know less than the typical person; the problem is that they also think that when I actually don’t know shit about something (in the sense the typical person would use that phrase). Conversely, showing off knowledge can come across as arrogant in certain situations.

I tried the “don’t care about status” thing for a while

Even if you don’t care about status, I’d say that what X (e.g. “I don’t know”) actually means in English is what English speakers actually mean when they say X, regardless of etymology (huh, it sounds tautological when put this way, doesn’t it?), and if you’re aware of this and use X to mean something else, you’re lying (unless your interlocutor knows you mean something else).

• “telling people you don’t know stuff projects low status”

If it’s a random stranger, I don’t care about status. If it’s a friend or a fellow “geek”, it’s probably a high status signal to send. That pretty much leaves work as the only area where I’d potentially run into this, and I’ve found “I don’t know; but I can find out!” works wonders (part of this is that at work, I’m presumably expected to actually know these things).

I’ve found “I don’t know, but isn’t it fun to find out!” is a fairly successful tactic, but I’m also deliberately aiming to attract geeks and people who like that answer in my life :)

• “A physicist is someone who answers all questions with ‘I don’t know, but I can find out.’”—Someone (possibly Nicola Cabibbo; quoting from my memory)

• If it’s a friend or a fellow “geek”, it’s probably a high status signal to send.

Rarely. It is of­ten a use­ful sig­nal to send but sel­dom high sta­tus.

• I don’t re­ally un­der­stand the re­ply. Are you say­ing it’s rarely high sta­tus even within my so­cial cir­cles? Or are you say­ing that my so­cial cir­cles are un­usual? To the former, all I can say is that we ap­par­ently have very differ­ent ex­pe­riences. To the lat­ter… well, duh, that’s WHY I speci­fied that it was spe­cific to THOSE groups...

• Are you say­ing it’s rarely high sta­tus even within my so­cial cir­cles?

I am say­ing that is more likely that you are in­flat­ing the phrase “high sta­tus” to in­clude things that are some­what low sta­tus but over­all so­cially re­ward­ing than that your sub­cul­ture is stretched quite that far in that (un­sus­tain­able) di­rec­tion.

• How would “I don’t know” be­ing high sta­tus be un­sus­tain­able?

For that mat­ter, what dis­tinc­tion are you draw­ing be­tween high sta­tus and so­cially re­ward­ing?

• For that mat­ter, what dis­tinc­tion are you draw­ing be­tween high sta­tus and so­cially re­ward­ing?

Yes, “high status” being inflated does seem to be the crux of the matter.

Socially rewarding behaviors that, ceteris paribus, are low status:

• Listen­ing to what some­one is say­ing. Even more if you deign to com­pre­hend and ac­cept their point.

• Salut­ing.

• Us­ing care­ful ex­pres­sion to en­sure you don’t offend peo­ple.

• My gen­eral ex­pe­rience has been that “I don’t know, but I’ll find out”, said to some­one cur­rently equal or lower sta­tus than me, clearly but mildly cor­re­lates with most of the low sta­tus be­hav­ior you men­tioned. I’m not as sure how it af­fects peo­ple higher sta­tus than me, since I don’t have as many of those re­la­tion­ships /​ data points.

So I con­tinue my as­ser­tion that, yes, it’s high sta­tus, not merely so­cially re­ward­ing. I still sus­pect this is a weird and un­usual set of ex­pe­riences, and prob­a­bly has to do with how I po­si­tion “I don’t know” rel­a­tive to oth­ers.

• In some cir­cles, per­ceived sig­nal use­ful­ness is a causal fac­tor to­wards the sig­nal’s sta­tus-level.

To un­box the above: In some groups I’ve been with, send­ing com­pressed sig­nals that ev­ery­one in the group un­der­stands is a high-sta­tus sig­nal, re­gard­less of whether it’s a “low-sta­tus” or “high-sta­tus” sig­nal in other en­vi­ron­ments.

“Hey, I have an idea but I’m not quite sure how to go about putting it in prac­tice” is a very low sta­tus sig­nal in meatspace for all meatspaces I’ve been in ex­cept one, but a very high sta­tus sig­nal in e.g. cer­tain on­line hack­ing com­mu­ni­ties.

Like­wise for the case at hand, there are places where “I don’t know” can even be the high­est sta­tus sig­nal. For the most mem­o­rable ex­am­ple, I’ve once vis­ited a church where the peo­ple at the top were an­swer­ing “I don’t know” to the most ques­tions, sig­nal­ing their close­ness to di­v­inity im­plic­itly, while the “sim­ple­tons” at the bot­tom of the lad­der had an opinion on ev­ery­thing, and thus would never “not know”.

• I’ve had peo­ple tell me to taboo “I don’t know” be­cause I use it so much. Th­ese be­ing fairly av­er­age or slightly above av­er­age peo­ple who are an­noyed that I don’t have a strong opinion about things like “what do you want to eat tonight?” Some have made jokes about putting “I don’t know” on my tomb­stone. As­sum­ing that I die and am later re­s­ur­rected and dis­cover this was ac­tu­ally done, I will be most dis­pleased.

• I usu­ally in­ter­pret that con­text as “I don’t have a prefer­ence”, which I would read­ily agree is use­ful to taboo. If you gen­uinely don’t know what you want (de­spite hav­ing an ap­par­ent hid­den but strong prefer­ence) then … that’s a new one on me ^^;

• Toss a men­tal coin and pre­tend to en­thuse about the re­sult?

• Be­fore de­clin­ing to offer an opinion, it’s worth con­sid­er­ing whether you’d benefit from the de­ci­sion be­ing made. (For in­stance, you could get a prompt din­ner.) If so, why not offer a lit­tle help? De­ci­sion mak­ing can be tiring work, and any in­put can make it eas­ier.

You could:

• Mention any limiting factors (e.g. I have \$20 or 1 hour)

• Mention options that are convenient

• Offer support to the person who makes the decision (particularly if you can avoid critiquing their choice).

• The example about stacks in 1.2 has a certain irony in context. This calls for a small mathematical aside:

A stack is a certain sophisticated type of geometric structure, increasingly used in algebraic geometry and algebraic topology (and spreading to some corners of differential geometry) to make sense of geometric intuitions and notions about “spaces” which occur “naturally” but fall squarely outside the traditional geometric categories (like manifolds, schemes, etc.).

See www.ams.org/notices/200304/what-is.pdf for a very short introduction focusing on the basic example of the moduli of elliptic curves.

The upshot of this vague outlook is that in the relevant fields, everything of interest is a stack (or a more exotic beast like a derived stack), precisely because the notion has been designed to be as general and flexible as possible! So asking someone working on stacks for a good example of something which is not a stack is bound to create a short moment of confusion.

Even if you do not care for stacks (and I wouldn’t hold it against you), if you are interested in open source/Internet-based scientific projects, it is worth having a look at the web page of the Stacks project (http://stacks.math.columbia.edu/), a collaborative, fully hyperlinked textbook on the topic, which is steadily growing towards the 3,500-page mark.

• he tried a re­frame to avoid the sta­tus quo bias: If he was liv­ing in Sili­con Valley already, would he ac­cept a \$70K pay cut to move to Santa Bar­bara with his col­lege friends? (An­swer: No.))

 But his util­ity func­tion would pre­dictably change un­der those cir­cum­stances.

I know that I have a sta­tus quo bias, he­do­nic tread­mill, and strongly de­creas­ing marginal util­ity of money (par­tic­u­larly when pro­gres­sive tax­a­tion is fac­tored in).

If I made 2/3 of what I do now, I’d be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now (roughly the factor described in the OP), I’d also be pretty much as happy as I am now, and want more money.

The log­i­cal con­clu­sion is that we should lower the weight of salary in­creases in de­ci­sions, the op­po­site of the con­clu­sion pro­posed here.

• If I made 2/3 of what I do now, I’d be pretty much as happy as I am now, and want more money; if I made 3/2 of what I do now, I’d also be pretty much as happy as I am now, and want more money.

You’re burying your argument in the constants ‘pretty much’ there. You can repeat your argument sorites-style after you have taken the 2/3 salary cut: “Well, if I made 2/3 of what I do now, I’d still be ‘pretty much as happy’ as I am now” and so on and so forth until you have hit sub-poverty wages.

To keep the limits of the log ar­gu­ment in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7; do you re­ally think if some­one handed you a billion dol­lars and you filled your world-fa­mous days com­pet­ing with Musk to reach Mars or some­thing in­sanely awe­some like that, you would only be twice as happy as when you were a low-sta­tus scrub-mon­key mak­ing 50k?

(par­tic­u­larly when pro­gres­sive tax­a­tion is fac­tored in).

Here again more work is nec­es­sary. One of the chief sug­ges­tions of pos­i­tive psy­chol­ogy is donat­ing more and buy­ing more fuzzies… and guess what is fa­vored by pro­gres­sive tax­a­tion? Donat­ing.

The log­i­cal con­clu­sion is that I should lower the weight of salary in­creases in de­ci­sions, the op­po­site of the con­clu­sion pro­posed here.

Of course there are peo­ple who are surely mak­ing the mis­take of over-valu­ing salaries; but you’re go­ing to need to do more work to show you’re one of them.

• To keep the limits of the log ar­gu­ment in mind, log 50k is 10.8 and log (50k+70k) is 11.69 and log 1 billion is 20.7

Com­par­ing these num­bers tells you pretty much noth­ing. First of all, tak­ing log(\$50k) is not a valid op­er­a­tion; you should only ever take logs of a di­men­sion­less quan­tity. The stan­dard solu­tion is to pick an ar­bi­trary dol­lar value \$X, and com­pare log(\$50k/​\$X), log(\$120k/​\$X), and log(\$10^9/​\$X). This is equiv­a­lent to com­par­ing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an ar­bi­trary con­stant.

This shouldn’t be a surprise, because under the standard definition, utility functions are translation-invariant. They are only compared in cases such as “is U1 better than U2?” or “is U1 better than a 50/50 chance of U2 and U3?” The answer to this question doesn’t change if we add a constant to U1, U2, and U3.

In par­tic­u­lar, it’s in­valid to say “U1 is twice as good as U2”. For that mat­ter, even if you don’t like util­ity func­tions, this is sus­pi­cious in gen­eral: what does it mean to say “I would be twice as happy if I had a mil­lion dol­lars”?

It would make sense to say, if your util­ity for money is log­a­r­ith­mic and you cur­rently have \$50k, that you’re in­differ­ent be­tween a 100% chance of an ex­tra \$70k and a 8.8% chance of an ex­tra \$10^9 -- that be­ing the prob­a­bil­ity for which the ex­pected util­ities are the same. If you think log­a­r­ith­mic util­ities are bad, this is the claim you should be re­fut­ing.
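That indifference probability is easy to check numerically; a quick sketch (assuming natural logs and the $50k baseline used above):

```python
import math

# Log utility of total wealth (natural log); baseline wealth $50k.
base = 50_000

# Utility gains over the baseline for the two options.
gain_sure = math.log(base + 70_000) - math.log(base)  # sure extra $70k
gain_big = math.log(base + 10**9) - math.log(base)    # extra $10^9

# Indifference probability p solves p * gain_big = gain_sure.
p = gain_sure / gain_big
print(round(100 * p, 1))  # 8.8 (percent)
```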

• you should only ever take logs of a di­men­sion­less quantity

God­dammit I have a de­gree in math­e­mat­ics and no-one ever told me that and I never figured it out for my­self.

I see the beginnings of an explanation here [http://physics.stackexchange.com/questions/7668/fundamental-question-about-dimensional-analysis]. Any pointer to a better explanation?

• Tak­ing logs of a di­men­sion­ful quan­tity is pos­si­ble, if you know what you’re do­ing. (In math, we make up our own rules: no one is al­lowed to tell us what we can and can­not do. Whether or not it’s use­ful is an­other ques­tion.) Here’s the real scoop:

In physics, we only re­ally and truly care about di­men­sion­less quan­tities. Th­ese are the quan­tities which do not change when we change the sys­tem of units, i.e. they are “in­var­i­ant”. Any­thing which is not in­var­i­ant is a purely ar­bi­trary hu­man con­ven­tion, which doesn’t re­ally tell me any­thing about the world. For ex­am­ple, if I want to know if I fit through a door, I’m only in­ter­ested in the ra­tio be­tween my height and the height of the door. I don’t re­ally care about how the door com­pares to some stan­dard me­ter some­where, ex­cept as an in­ter­me­di­ate step in some calcu­la­tion.

Nev­er­the­less, for prac­ti­cal pur­poses it is con­ve­nient to also con­sider quan­tities which trans­form in a par­tic­u­larly sim­ple way un­der a change of units sys­tems. Bor­row­ing some ter­minol­ogy from gen­eral rel­a­tivity, we can say that a quan­tity X is “co­var­i­ant” if it trans­forms like X --> (unit1 /​ unit2 )^p X when we change from unit1 to unit2. Here p is a real num­ber which in­di­cates the di­men­sion of the unit. Th­ese things aren’t in­var­i­ant un­der a change of units, so we don’t care about them in a fun­da­men­tal way. But they’re ex­tremely use­ful nev­er­the­less, be­cause you can con­struct in­var­i­ant quan­tities out of co­var­i­ant ones by mul­ti­ply­ing or di­vid­ing them in such a way that the units can­cel out. (In the con­crete ex­am­ple above, this al­lows us to mea­sure the door and me sep­a­rately, and wait un­til later to com­bine the re­sults.)

Once you’re willing to accept numbers which depend on arbitrary human convention, nothing prevents you from taking logs or sines or whatever of these quantities (in the naive way, by just punching the number sans units into your calculator). What you end up with is a number which depends in a particularly complicated way on your system of units. Conceptually, that’s not really any worse. But remember, we only care if we can find a way to construct invariant quantities out of them. Practically speaking, our experience as physicists is that quantities like this are rarely useful.

But there may be ex­cep­tions. And logs aren’t re­ally that bad, since as Kindly points out, you can still ex­tract in­var­i­ant quan­tities by adding them to­gether. As a work­ing physi­cist I’ve done calcu­la­tions where it was use­ful to think about logs of di­men­sion­ful quan­tities (key­words: “en­tan­gle­ment en­tropy”, “con­for­mal field the­ory”). Sines are a lot worse since they aren’t even mono­tonic func­tions: I can’t imag­ine any ap­pli­ca­tion where tak­ing the sine of a di­men­sion­ful quan­tity would be use­ful.

• I think it’d be ob­vi­ous how to take the log of a di­men­sional quan­tity.

e^(log ap­ple) = apple

• Right, but then log (2 ap­ple) = log 2 + log ap­ple and so forth. This is a perfectly sen­si­ble way to think about things as long as you (not you speci­fi­cally, but the gen­eral you) re­mem­ber that “log ap­ple” trans­forms ad­di­tively in­stead of mul­ti­plica­tively un­der a change of co­or­di­nates.

• Isn’t the ar­gu­ment to a sine by de­fault a quan­tity of an­gle, that is Ra­di­ans in SI? (I know ra­di­ans are epiphe­nom­e­nal/​w/​e, but still)

• I can’t imag­ine any ap­pli­ca­tion where tak­ing the sine of a di­men­sion­ful quan­tity would be use­ful.

Ma­chine learn­ing meth­ods will go right ahead and ap­ply what­ever col­lec­tion of func­tions they’re given in what­ever way works to get em­piri­cally ac­cu­rate pre­dic­tions from the data. E.g. add the pa­tient’s tem­per­a­ture to their pulse rate and di­vide by the cotan­gent of their age in decades, or what­ever.

So it can cer­tainly be use­ful. Whether it is mean­ingful is an­other mat­ter, and touches on this co­nun­drum again. What and whence is “un­der­stand­ing” in an AGI?

Eliezer wrote some­where about hy­po­thet­i­cally be­ing able to de­duce spe­cial rel­a­tivity from see­ing an ap­ple fall. What sort of mechanism could do that? Where might it get the idea that adding tem­per­a­ture to pulse may be use­ful for mak­ing em­piri­cal pre­dic­tions, but use­less for “un­der­stand­ing what is hap­pen­ing”, and what does that quoted phrase mean, in terms that one could pro­gram into an AGI?

• “units are a use­ful er­ror-check­ing ho­mo­mor­phism”

• I don’t think “ho­mo­mor­phism” is quite the right word here. Keep­ing track of units means keep­ing track of var­i­ous scal­ing ac­tions on the things you’re in­ter­ested in; in other words, it means keep­ing track of cer­tain sym­me­tries. The rea­son you can use this for er­ror-check­ing is that if two things are equal, then any rele­vant sym­me­tries have to act on them in the same way. But the units them­selves aren’t a ho­mo­mor­phism, they’re just a short­hand to in­di­cate that you’re work­ing with things that trans­form in some non­triv­ial way un­der some sym­me­try.

• I don’t think “ho­mo­mor­phism” is quite the right word here.

The map from dimensional quantities to units is structure-preserving, so yes, it is a homomorphism between something like rings. For example, all distances in SI are mapped into the element “meter”, and all time intervals into the element “second”. Addition and subtraction are trivial under the map (e.g. m+m=m), and so is multiplication by a dimensionless quantity, while multiplication and division by a dimensional quantity generates new elements (e.g. meter per second).

Con­vert­ing be­tween differ­ent mea­sure­ment sys­tems (e.g. SI and CGS) adds var­i­ous scale fac­tors, thus en­larg­ing the codomain of the map.
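A toy sketch of that map (hypothetical code, purely illustrative): represent a unit as a dict of base-unit exponents, so that multiplication adds exponents and generates new elements, while addition demands like units and leaves the unit unchanged.

```python
# Hypothetical unit-signature arithmetic: a unit is a dict of exponents.
def mul(a, b):
    """Product of unit signatures: exponents add, e.g. m * s^-1 -> m/s."""
    out = dict(a)
    for unit, power in b.items():
        out[unit] = out.get(unit, 0) + power
        if out[unit] == 0:
            del out[unit]  # dimensionless factors cancel out
    return out

def add(a, b):
    """Sum of like quantities: the unit is unchanged ("m + m = m")."""
    if a != b:
        raise TypeError("cannot add quantities with different units")
    return dict(a)

meter = {'m': 1}
second = {'s': 1}
speed = mul(meter, {'s': -1})      # a new element: meter per second
print(speed)                       # {'m': 1, 's': -1}
print(add(meter, meter) == meter)  # True
```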

• I don’t know of any good ex­pla­na­tions; this seems rele­vant but re­quires a sub­scrip­tion to ac­cess. Un­for­tu­nately, no-one’s ever ex­plained this to me ei­ther, so I’ve had to figure it out by my­self.

What I’d add to the dis­cus­sion you linked to is that in ac­tual prac­tice, log­a­r­ithms ap­pear in equa­tions with units in them when you solve differ­en­tial equa­tions, and ul­ti­mately when you take in­te­grals. In the sim­plest case, when we’re in­te­grat­ing 1/​x, x can have any units what­so­ever. How­ever, if you have bounds A and B, you’ll get log(B) - log(A), which can be rewrit­ten as log(B/​A). There’s no way A and B can have differ­ent units, so B/​A will be di­men­sion­less.

Of course, of­ten peo­ple are sloppy and will just keep do­ing things with log(B) and log(A), even though these don’t make sense by them­selves. This is perfectly all right be­cause the logs will have to can­cel even­tu­ally. In fact, at this point, it’s even okay to drop the units on A and B, be­cause log(10 ft) - log(5 ft) and log(10 m) - log(5 m) rep­re­sent the same quan­tity.
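That last point can be checked directly: since log(B) − log(A) = log(B/A), the difference is unchanged by any rescaling of units (a small sketch, with an assumed feet-per-meter conversion factor):

```python
import math

FT_PER_M = 3.28084  # assumed conversion factor

# The same two lengths, measured once in meters and once in feet.
diff_m = math.log(10) - math.log(5)
diff_ft = math.log(10 * FT_PER_M) - math.log(5 * FT_PER_M)

# The unit cancels: both differences equal log(10/5).
print(math.isclose(diff_m, diff_ft))           # True
print(math.isclose(diff_m, math.log(10 / 5)))  # True
```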

• I don’t know of any good ex­pla­na­tions; this seems rele­vant but re­quires a sub­scrip­tion to ac­cess.

Most of that pa­per is the au­thors re­but­ting what other peo­ple have said about the is­sue, but there are two bits that try to ex­plain why one can’t take logs of di­men­sional things.

Page 68 notes that $y=\log_b x \ \textrm{if} \ x = b^y$, which “pre­cludes the as­so­ci­a­tion of any phys­i­cal di­men­sion to any of the three vari­ables b, x, and y”.

And on pages 69-70:

The rea­son for the ne­ces­sity of in­clud­ing only di­men­sion­less real num­bers in the ar­gu­ments of tran­scen­den­tal func­tion is not due to the [alleged] di­men­sional non­ho­mo­gene­ity of the Tay­lor ex­pan­sion, but rather to the lack of phys­i­cal mean­ing of in­clud­ing di­men­sions and units in the ar­gu­ments of these func­tion. This dis­tinc­tion must be clearly made to stu­dents of phys­i­cal sci­ences early in their un­der­grad­u­ate ed­u­ca­tion.

That sec­ond snip­pet is too vague for me. But I’m still think­ing about the first one.

[Edited to fix the LaTeX.]

• The (say) real sine function is defined such that its domain and codomain are (subsets of) the reals. The reals are usually characterized as the complete ordered field. I have never come across units that—taken alone—satisfy the axioms of a complete ordered field, and having several units introduces problems such as how we would impose a meaningful order. So a sine function over unit-ed quantities is sufficiently non-obvious as to require a clarification of what would be meant by sin(\$1).

For example—switching over now to logarithms—if we treat \$1 as the real multiplicative identity (i.e. the real number, unity) unit-multiplied by the unit \$, and extrapolate one of the fundamental properties of logarithms—that log(ab) = log a + log b—we find that log(\$1) = log(\$) + log(1) = log(\$) (assuming we keep that log(1) = 0). How are we to interpret log(\$)? Moreover, log(\$^2) = 2log(\$). So if I log the square of a dollar, I obtain twice the log of a dollar. How are we to interpret this in the above context of utility?

Or an example from trigonometric functions: one characterization of the cosine and sine stipulates that cos^2 + sin^2 = 1, so we would have that cos^2(\$1) + sin^2(\$1) = 1. If this is the real unity, does this mean that the cosine function on dollars outputs a real number? Or if the RHS is \$1, does this mean that the cosine function on dollars outputs a dollar^(1/2) value? Then consider that double, triple, etc. angles in the standard cosine function can be written as polynomials in the single-angle cosine. How would this translate?

So this is a case where the ‘burden of meaningfulness’ lies in proposing a meaningful interpretation (which now seems rather difficult), even though at first it seems obvious that there is a single reasonable way forward. The context of the functions needs to be considered; the sine function originated with plane geometry and was extended to the reals and then the complex numbers. Each of these was motivated by an (analytic) continuation into a bigger ‘domain’ that fit perfectly with existing understanding of that bigger domain; this doesn’t seem to be the case here.

• How are we to in­ter­pret [the log­a­r­ithm of one dol­lar] in the above con­text of util­ity?

You pick an ar­bi­trary con­stant A of di­men­sion “amount of money”, and use log(x/​A) as an util­ity func­tion. Chang­ing A amounts to adding a con­stant to the util­ity (and chang­ing the base of the log­a­r­ithms amounts to mul­ti­ply­ing it by a con­stant), which doesn’t af­fect ex­pected util­ity max­i­miza­tion. EDIT: And once it’s clear that the choice of A is im­ma­te­rial, you can abuse no­ta­tion and just write “log(x)”, as Kindly says.
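For instance (a minimal sketch): whichever reference amount A one picks, expected-utility comparisons come out the same, because every outcome’s utility shifts by the same constant.

```python
import math

def expected_log_utility(lottery, A):
    """Expected utility of (probability, wealth) pairs under u(x) = log(x/A)."""
    return sum(p * math.log(w / A) for p, w in lottery)

sure = [(1.0, 120_000)]
gamble = [(0.5, 50_000), (0.5, 10**9)]

# The preference between the two lotteries is the same for every choice of A.
prefs = {expected_log_utility(gamble, A) > expected_log_utility(sure, A)
         for A in (1, 50_000, 123.45)}
print(prefs)  # {True}
```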

• You can only add, subtract and compare like quantities, but log(50000 * 1 dollar) = log(50000) + log(1 dollar), which is a meaningless expression. What’s the logarithm of a dollar?

• What’s the log­a­r­ithm of a dol­lar?

An ar­bi­trary ad­di­tive con­stant. See the last para­graph of Kindly’s com­ment.

• What’s the log­a­r­ithm of a dol­lar?

What do you need to “ex­ponate” to get a dol­lar?

That, what­ever that might be, is the log­a­r­ithm of a dol­lar.

• Well, we could choose to factorise it as log(50000 dollars) = log(50000 dollar^0.5 * 1 dollar^0.5) = log(50000 dollar^0.5) + log(1 dollar^0.5). That does keep the units of the addition operands the same. Now we only have to figure out what the log of a root-dollar is...

the log­a­r­ithm of a dollar

It’s re­ally just the same ques­tion again—why can’t I write log(1 dol­lar) = 0 (or maybe 0 dol­lar^0.5), the same as I would write log(1) = 0.

• It’s re­ally just the same ques­tion again—why can’t I write log(1 dol­lar) = 0 (or maybe 0 dol­lar^0.5), the same as I would write log(1) = 0.

\$1 = 100¢. Now try log­ging both sides by strip­ping off the cur­rency units first!

• This is equiv­a­lent to com­par­ing 10.8 + C, 11.69 + C, and 20.7 + C, where C is an ar­bi­trary con­stant.

This is what I did, with­out the pedantry of the C.

In par­tic­u­lar, it’s in­valid to say “U1 is twice as good as U2”. For that mat­ter, even if you don’t like util­ity func­tions, this is sus­pi­cious in gen­eral: what does it mean to say “I would be twice as happy if I had a mil­lion dol­lars”?

I don’t fol­low at all. How can util­ities not be com­pa­rable in terms of mul­ti­pli­ca­tion? This falls out pretty much ex­actly from your clas­sic car­di­nal util­ity func­tion! You seem to be as­sum­ing or­di­nal util­ities but I don’t see why you would talk about some­thing I did not draw on nor would ac­cept.

• This is what I did, with­out the pedantry of the C.

The point is that be­cause the con­stant is there, say­ing that util­ity grows log­a­r­ith­mi­cally in money un­der­speci­fies the ac­tual func­tion. By ig­nor­ing C, you are im­plic­itly us­ing \$1 as a point of com­par­i­son.

A gen­er­ous in­ter­pre­ta­tion of your claim would be to say that to some­one who cur­rently only has \$1, hav­ing a billion dol­lars is twice as good as hav­ing \$50000 -- in the sense, for ex­am­ple, that a 50% chance of the former is just as good as a 100% chance of the lat­ter. This doesn’t seem out­right im­plau­si­ble (hav­ing \$50000 means you jump from “starv­ing in the street” to “be­ing more fi­nan­cially se­cure than I cur­rently am”, which solves a lot of the prob­lems that the \$1 per­son has). How­ever, it’s also ir­rele­vant to some­one who is guaran­teed \$50000 in all out­comes un­der con­sid­er­a­tion.

• How­ever, it’s also ir­rele­vant to some­one who is guaran­teed \$50000 in all out­comes un­der con­sid­er­a­tion.

Then how do you sug­gest the per­son un­der dis­cus­sion eval­u­ate their work­ing pat­terns if log util­ities are only use­ful for ex­pected val­ues?

• By com­par­ing changes in util­ity as op­posed to ab­solute val­ues.

To the per­son with \$50000, a change to \$70000 would have a log util­ity of 0.336, and a change to \$1 billion would have a log util­ity of 9.903. A change to \$1 would have a log util­ity of −10.819.
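Those three numbers can be reproduced directly (natural logs, starting wealth $50,000):

```python
import math

start = 50_000

def delta_u(new_wealth):
    """Change in log utility when wealth moves from $50k to new_wealth."""
    return math.log(new_wealth / start)

print(round(delta_u(70_000), 3))  # 0.336
print(round(delta_u(10**9), 3))   # 9.903
print(round(delta_u(1), 3))       # -10.82
```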

• I see, thanks.

• How can util­ities not be com­pa­rable in terms of mul­ti­pli­ca­tion?

“The util­ity of A is twice the util­ity of B” is not a state­ment that re­mains true if we add the same con­stant to both util­ities, so it’s not an ob­vi­ously mean­ingful state­ment. We can make the ra­tio come out how­ever we want by perform­ing an over­all shift of the util­ity func­tion. The fact that we think of util­ities as car­di­nal num­bers doesn’t mean we as­sign any mean­ing to ra­tios of util­ities. But it seemed that you were try­ing to say that a per­son with a log­a­r­ith­mic util­ity func­tion as­sesses \$10^9 as hav­ing twice the util­ity of \$50k.

• The fact that we think of util­ities as car­di­nal num­bers doesn’t mean we as­sign any mean­ing to ra­tios of util­ities.

Kindly says the ra­tios do have rele­vance to con­sid­er­ing bets or risks.

But it seemed that you were try­ing to say that a per­son with a log­a­r­ith­mic util­ity func­tion as­sesses \$10^9 as hav­ing twice the util­ity of \$50k.

Yes, I think I see my er­ror now, but I think the force of the num­bers is clear: log util­ity in money may be more ex­treme than most peo­ple would in­tu­itively ex­pect.

• In par­tic­u­lar, it’s in­valid to say “U1 is twice as good as U2”. For that mat­ter, even if you don’t like util­ity func­tions, this is sus­pi­cious in gen­eral: what does it mean to say “I would be twice as happy if I had a mil­lion dol­lars”?

This is what I im­me­di­ately thought when I first read about the Repug­nant Con­clu­sion on Wikipe­dia, years ago be­fore hav­ing ever heard of the VNM ax­ioms or any­thing like that.

• [D]o you re­ally think if some­one handed you a billion dol­lars and you filled your world-fa­mous days com­pet­ing with Musk to reach Mars or some­thing in­sanely awe­some like that, you would only be twice as happy as when you were a low-sta­tus scrub-mon­key mak­ing 50k?

Only twice as?

Adap­ta­tion level the­ory sug­gests that both con­trast and ha­bit­u­a­tion will op­er­ate to pre­vent the win­ning of a for­tune from ele­vat­ing hap­piness as much as might be ex­pected. … As pre­dicted, lot­tery win­ners were not hap­pier than controls

It’s a well repli­cated phe­nomenon.

• Lot­tery-win­ners are self-se­lected for a num­ber of things in­clud­ing in­nu­mer­acy or fool­ish­ness and not hav­ing grand pro­jects ma­te­ri­ally ad­vanced by win­nings, and the fa­mous lot­tery win­ner ex­am­ples are for rel­a­tively small sums as far as I know—most of the win­ners in that pa­per were \$400k or less at a time of higher tax rates, with a se­ri­ous se­lec­tion is­sue there as well (less than half of the win­ners in­ter­viewed).

• One of the chief suggestions of positive psychology is donating more and buying more fuzzies… and guess what is favored by progressive taxation? Donating.

You don’t get to de­cide where most of your tax money goes, which I guess means that for a large frac­tion of peo­ple taxes don’t count as fuzzy-buy­ing dona­tions.

• Which is a failure mode of most peo­ple’s think­ing about taxes. Most of your tax money goes to bor­ing things you don’t want to con­cern your­self with and which you don’t have any ex­per­tise in, such that you de­cid­ing ex­actly where the money went would be dis­as­trous. Some­one with the re­quired ex­per­tise is do­ing their best to make sure the limited available money is spent care­fully on those things, in most cases.

I like to think that in gen­eral, taxes are my sub­scrip­tion fee for liv­ing in a civil­i­sa­tion rather than a feu­dal plu­toc­racy.

There are some spe­cific things my taxes are spent on that I ac­tively re­sent, but the re­sponse to that is to op­pose those spe­cific things, and I ac­cept democ­racy and de­bate as the means to (slowly and un­re­li­ably) im­prove the situ­a­tion.

• I think of taxes as a “sub­scrip­tion fee for liv­ing in a civil­i­sa­tion”, too, but I think you’re over­es­ti­mat­ing how use­ful what most of the tax money is spent on is to most of the pop­u­la­tion and un­der­es­ti­mat­ing the ex­tent to which pre­sent-day First World coun­tries are plu­toc­ra­cies.

• Well, nei­ther of us have quan­tified our es­ti­mates for the use­ful­ness of gov­ern­ment spend­ing, or bro­ken it down by sec­tor or de­mo­graph­ics. So, how much am I over­es­ti­mat­ing it, and in what spe­cific ways? :)

I live in Scot­land. I con­sider it to be a civil­ised coun­try mostly. It has good free ed­u­ca­tion and health care, and busi­nesses are reg­u­lated as to em­ploy­ment law, health and safety, and en­vi­ron­men­tal im­pact. I don’t claim more ex­per­tise in how all that gets ar­ranged than the peo­ple who ar­range it, and I would be scep­ti­cal if you did, with­out see­ing ev­i­dence.

The civil­i­sa­tion of the USA has some ex­is­ten­tial risk for feu­dal plu­toc­racy, but I think it nar­rowly avoided one of the risk fac­tors this week and I hold out some hope for steady im­prove­ment if it can stop shit­ting its pants over imag­i­nary ter­ror­ist threats and start tak­ing hu­man rights se­ri­ously again. But even if I’m wrong about that, I never said that taxes were suffi­cient to pre­vent so­cial break­down. Just nec­es­sary.

• I don’t claim more ex­per­tise in how all that gets ar­ranged than the peo­ple who ar­range it, and I would be scep­ti­cal if you did, with­out see­ing ev­i­dence.

I’m not ques­tion­ing their ex­per­tise, I’m ques­tion­ing their goals. I usu­ally try to ap­ply Han­lon’s ra­zor to sin­gle in­di­vi­d­u­als, but I’m re­luc­tant to ap­ply it to en­tire gov­ern­ments. I’m pretty sure that spend­ing on defence an amount com­pa­rable to (or, in cer­tain coun­tries, even greater than) that spent on re­search has a point, I just don’t think it’s to benefit most of the pop­u­la­tion.

The civil­i­sa­tion of the USA has some ex­is­ten­tial risk for feu­dal plu­toc­racy, but I think it nar­rowly avoided one of the risk fac­tors this week

In terms of what he’s ac­tu­ally done, as op­posed to what he says, Obama’s eco­nomic policy isn’t that differ­ent to Repub­li­cans’. Or do “is­sues like peace, im­mi­gra­tion, gay and women’s rights, prayers in school”¹ (to quote the ar­ti­cle linked) suffice to make a gov­ern­ment not count as a plu­toc­racy?

Any­way, how much have you heard about lob­by­ing, as­so­ci­a­tions such as the Bilder­berg Group or the Trilat­eral Com­mis­sion, etc.? (Un­for­tu­nately, the peo­ple who talk about those things also tend to spew out lots of non­sense about Rep­tili­ans and what­not, but I have my own hy­poth­e­sis about why they do that.)

1. When I posted that ar­ti­cle on Face­book, the only com­ment was from a gay friend of mine point­ing out that with one pres­i­dent gay rights would go back to the 1800s and with the other they might be al­lowed to marry.

• This is wandering away from the topic a bit. I doubt anyone could make a good case for any of:

• taxes are inherently harmful and always misspent

• taxes are always spent wisely

• there exists any political system under which immensely rich people couldn’t wield a lot of political power to try to further enrich themselves.

• the immensely rich bother to conspire for any other purpose, or actually care about politics much beyond what it can get them personally

• there is literally nothing a democratically elected government can or will do to limit the political power of the immensely rich in any way.

• there exists any political system under which immensely rich people couldn’t wield a lot of political power to try to further enrich themselves.

Sure there does. A military dictatorship, for one.

• Name one where the dictator and his cronies were not also embezzling the wealth of the country and living it up with their rich buddies. That’s what they grab power for.

Even if the guy at the top has ideological principles that forbid such behaviour (rare) and isn’t a hypocrite about them (super rare), there is always someone high up in the hierarchy who is in the market for favours and, due to the nature of a dictatorial hierarchy, essentially untouchable.

• You’re describing a situation in which politically powerful people become rich, not one in which rich people become politically powerful.

• That’s a distinction with no significance. Those who grab political power to enrich themselves will peddle influence as one way of doing so. Or have you got a real-life counter-example?

I find the offered hypothetical and unprecedented military dictatorship where political power is kept separate from economic power … unpersuasive.

• Do you have an example of a military dictatorship where the immensely rich were allowed to keep their wealth, but couldn’t use it to exert political influence?

• Well, no. Not offhand, anyway. But people can become rich after the revolution, and I can’t think of any examples of people gaining “a lot of political power to try to further enrich themselves” this way. Of course, those who already have such power (due to corruption or whatever) do tend to use it to acquire wealth...

EDIT: Put much better here.

• I ADBOC with the negation of those statements (provided “there exists” in the third one means “there has existed so far” rather than “there could ever exist in principle”).

• That wasn’t what I meant to imply.

• to keep the limits of the log argument in mind, ln 50k is 10.8, ln(50k+70k) is 11.69, and ln 1 billion is 20.7

Ln $100 is 4.6, at which point it’s doubtful that you can survive.

• Ah, but suppose subsistence wages plummeted as in Hanson’s em hell scenario? Ln $100 merely shows that ‘the poor also smile’ and the utility-maximizing thing is quadrillions of impoverished minds!

• If we continue to use Utility = ln($), then utilities go infinitely negative as you approach zero :).

• Allowing us to refute the repugnant conclusion. Quadrillions of minds with $(1+e). We should start a campaign to use very large currency units in preparation for the Singularity.
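For the record, the figures quoted in this exchange are natural logarithms; a quick sanity check in plain Python:

```python
import math

# Log (here: natural-log) utility of wealth, as in the comments above.
for wealth in (100, 50_000, 50_000 + 70_000, 1_000_000_000):
    print(f"ln({wealth}) = {math.log(wealth):.2f}")
# ln(100) = 4.61, ln(50000) = 10.82, ln(120000) = 11.70, ln(1000000000) = 20.72
```

So going from $50k to $120k buys about 0.88 utils, while going from $50k to a billion buys only about 9.9: the diminishing returns the thread is leaning on.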

• guess what is favored by progressive taxation? Donating.

Sort of? I mean, the primary work here is being done by the deductibility of charitable donations from income. Progressive taxation helps in that charitable donations are cheaper the richer you are (each dollar given away only costs 70 cents, instead of 100 if there were no deduction / you were paying no income taxes), but that’s shaping the incentive, not making it.
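The incentive being described fits in one line; the 30% marginal rate below is purely illustrative (it is what makes the “70 cents” figure come out), not a claim about any particular tax code:

```python
def donation_cost(amount: float, marginal_rate: float) -> float:
    """After-tax cost of a deductible donation: the donor gives up
    `amount` but recovers `marginal_rate * amount` in reduced tax."""
    return amount * (1 - marginal_rate)

print(donation_cost(1.00, 0.30))  # 0.7  (each dollar given away costs 70 cents)
print(donation_cost(1.00, 0.00))  # 1.0  (no deduction / no income tax)
```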

• … 50k … a billion dollars...

Sure, that’s why I said 23 and 32 rather than more significant multipliers.

Also: Sometimes you settle yourself into a local maximum, and even if it is not a global maximum, not switching may be OK if the local one is not too much lower than the global maximum.

favored by progressive taxation? Donating

Yes, I agree that using your tax deduction gives an extra boost to donating.

• I realized that what bothers me is the neglect of utility-function differences in the counterfactual world.

Should you start using heroin? Let’s try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing your decision. If you were a heroin addict, and had lost everything, and heroin were your only friend and consolation, would you want to stop? Maybe not. So go ahead, shoot up.

If, despite your deep desire to go into classical music as a career (which in real life you did, to your great satisfaction), you had followed the money into the financial sector, and after years of 80-hour weeks had sunk into cynicism and no longer cared for anything but making more money to support your extravagant spending habits, would you then want to leave the financial industry for a life of music and a modest income? Probably not, so go ahead: follow the money, burn out your soul, and buy yourself a Porsche.

• I have trouble believing that in those situations I’d actually prefer to be that sort of rock-bottom, burnt-out person rather than thinking “I wish I’d made different choices when I was 20, oh foolish foolish me.”

Having been in some rather bad situations, I’ve never once thought “Gosh, this is so much better than if I’d had a successful, high-paying, yet enjoyable career!”

• This method of reducing bias only works for rational decisions made using your current utility function. Otherwise you will be prone to circular decisions like those you describe (decisions that feed themselves).

• Shouldn’t we include the costs of moving? Even if the social costs are held to be negligible (they probably shouldn’t be), there’s the time spent and the monetary costs of moving.

• Sure, one of the things I most like about having more money is being able to donate more. However, the main consideration of her brother and others in these circumstances is, I strongly suspect, not maximizing their donation capacity, but rather a more generic personal utility calculation.

• I would be interested in an updated checklist. This seems potentially useful enough for a post of its own.

• Recent example from Anna: Using grapefruit juice to keep up brain glucose, I had

The idea that willpower or thinking depletes brain glucose has been debunked:

• I put the checklist into an Anki deck a week or two ago that I’ve been reviewing (as cloze deletions). Subjectively it seems to have helped the relevant concepts come more readily to mind, although that could just be the CFAR workshop (though we didn’t talk about the checklist then, and some of the ideas in the checklist, like social commitment mechanisms, weren’t otherwise explicitly mentioned).

• Would you mind sharing this deck? It would be a nice addition to the Anki decks by LW users.

• I admit I’m not entirely sure how to share a deck.

• Ah, you are not the first! This comment by tgb taught me how to do it. (I’m assuming you are using Anki 2.)

• There are some good ideas here that I can pick up on. Among the things that I already successfully implement: it may sound stupid, but I think of my different brain modules as different people, and have different names for them. That way I can compliment or admonish them without thinking, “Oh..kay, I’m talking to myself?” That makes it easier to remember that I’m not the only one reacting and making the sole decisions, but avoids turning everything into similar-sounding entities (me, myself, I, my brain, my mind, etc.)

Example: This morning, I kept getting the feeling that something was not quite right; I felt lighter for some reason. I recognized that feeling as Jeffery trying to tell me something, so I had to stop and evaluate what I had done that morning so far. I realized that I was still wearing my slippers, and probably would not have realized it until I retracted my kickstand to leave for work. I gave credit where credit is due, and thought (without speaking) “Good catch, Jeffery!”

(Jeffery [spelled that way because I “mistyped” it both times just now, before deciding that that’s how he wants to spell it] is the one who handles the autopilot functions of my daily life, and while he does his best in unfamiliar situations, he usually does not consult me and does foolish things unless I have programmed him with routines. He is named after the anthropomorphic half chicken/half goat/half man protector of the “Deadly Maze” in Chowder. I interpreted the Deadly Maze as an allegory for the subconscious mind.)

• Interesting. I’ve occasionally experimented with something similar, but never thought of contacting Autopilot this way. Yeah, that’s what I’ll call him.

I get the feeling that this might be useful in breaking out of some of my procrastination patterns: just call Autopilot and tell him which routine to start. Not tested yet, as then I’d forget about writing this reply.

• It’s as if your own body is a guy that does his job if you train him right, but makes stupid decisions when something unexpected happens. I just take a more literal approach to the interaction. I also refer to him as “my answering machine” when I am woken up in the middle of the night. It took my wife a while to realize that the person she was talking to was “not me”. My answering machine can make perfectly normal-sounding replies to normal questions, but is unable to come up with creative answers to unusual questions, and I have no memory of the events. Another unnamed, possibly separate module runs when my body is alarmed but I am not yet conscious. It constantly asks for data, verbally questioning other humans nearby: “What is happening? What is going on? What time is it?” Unlike situations with the answering machine, I retain conscious memory of the occurrence, but not from a first-person perspective; it’s more like I remember somebody telling me about what happened, but in this case that person was (allegedly) me.

• Funny, I do something similar, except I call mine “Planner,” “Want,” “Bum,” and “Cynic.” I never really considered my autopilot mode anything in particular. Usually I just do this when I am struggling with motivation, and usually those four concepts are the main issue: planning to do something, then wanting to do something else, feeling like not doing anything, and realizing I’m not going to do it so why bother anyway… and reminding myself that these are learned habits and I can get rid of them if I bring in new habits.

• This is basically the Internal Family Systems Model, though its focus is therapy, i.e., improving dysfunctional behavior.

But your point about regularly communicating with your various ‘parts’ seems like a really good idea. How well have you maintained this as a habit since your comment?

• I have read this post and have not been persuaded that people who follow these steps will lead longer or happier lives (or will cause others to live longer or happier lives). I therefore will make no conscious effort to pay much of any regard to this post, though it is plausible it will have at least a small unconscious effect. I am posting this to fight groupthink and sampling biases, though this post actually does very little against them.

• Longer? Probably not. Happier? Possible, depending on that person’s baseline, since we don’t know our own desires and acquiring these skills might help, but given the hedonic treadmill effect, unlikely. Achieving more of their interim goals? Possible if not probable. There are a lot of possible goals aside from living longer and being happier.

• I have decided that maximizing the integral of happiness with respect to time is my selfish supergoal, and that maximizing the double integral of happiness with respect to time and with respect to number of people is my altruistic supergoal. All other goals are only relevant insofar as they affect the supergoals. I have yet to be convinced this is a bad system, though previous experience suggests I will probably make modifications at some point. I also need to decide what weight to place on the selfish/altruistic components.

But despite my finding such an abstract way of characterizing my actions interesting, the actual determination of the weights and the actual function I’m maximizing are just determined by what I actually end up doing. In fact, constructing this abstract system does not seem to convincingly help me further its purported goal, and I therefore cease all serious conversation about it.
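As a sketch in notation (my rendering of the comment, not the commenter’s own formula: h is momentary happiness, p ranges over people, and w is the still-undecided selfish/altruistic weight):

```latex
U_{\text{selfish}} = \int h_{\text{self}}(t)\,dt,
\qquad
U_{\text{altruistic}} = \iint h(p,t)\,dt\,dp,
\qquad
U = w\,U_{\text{selfish}} + (1-w)\,U_{\text{altruistic}}
```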

• In fact constructing this abstract system does not seem to convincingly help me further its purported goal

I think this is a common problem. That doesn’t mean you have to give up on having your second-order desires agree with your first-order desires. It is possible to use your abstract models to change your day-to-day behaviour, and it’s definitely possible to build a more accurate model of yourself and then use that model to make yourself do the things you endorse yourself doing (i.e. avoiding having to use willpower by making what you want to want to do the “default”).

As for me, I’ve decided that happiness is too elusive a goal: I’m bad at predicting what will make me happier-than-baseline, the process of explicitly pursuing happiness seems to make it harder to achieve, and the hedonic treadmill effect means that even if I did, I would have to keep working at it constantly to stay in the same place. Instead, I default to a number of proxy measures: I want to be physically fit, so I endorse myself exercising and preferably enjoying exercise; I want to have enough money to satisfy my needs; I want to finish school with good grades; I want to read interesting books; I want to have a social life; I want to be a good friend. Taken all together, these are at least the building blocks of happiness, which happens by itself unless my brain chemistry gets too whacked out.

• So the normal chain of events here would just be that I argue those are still all subgoals of increasing happiness, and we would go back and forth about that. But this is just arguing by definition, so I won’t continue along that line.

To the extent I understand the first paragraph in terms of what it actually says at the level of real-world experience, I have never seen evidence supporting its truth. The second paragraph seems to say what I intended the second paragraph of my previous comment to mean. So really it doesn’t seem that we disagree about anything important.

• But this is just arguing by definition, so I won’t continue along that line.

Agreed. I find it practical to define my goals as all of those subgoals and not make happiness an explicit node, because it’s easy to evaluate my subgoals and measure how well I’m achieving them. But maybe you find it simpler to have only one mental construct, “happiness”, instead of lots.

The second paragraph seems to say what I intended the second paragraph of my previous comment to mean.

I guess I explicitly don’t allow myself to have abstract systems with no measurable components and/or clear practical implications; my concrete goals take up enough mental space. So my automatic reaction was “you’re doing it wrong,” but it’s possible that having an unconnected mental system doesn’t sabotage your motivation the same way it does mine. Also, “what I actually end up doing” doesn’t, to me, have the connotation of “choosing and achieving subgoals”; it has the connotation of not having goals. But it sounds like that’s not what it means to you.

• I would argue that altruism should be part of the selfish utility function. The reason that you care about other people is that you value other people. If you did not value other people, there would be no reason for them to be in your utility function.

• I would argue that altruism should be part of the selfish utility function.

Excellent! This nuance of what “selfish” means is something I find myself reiterating all too frequently. (Where the latter means I’ve done it at least three times that I can recall.)

• This is reaching the point of just arguing about definitions, so I reject this line of discussion as well.

• It’s not an argument about definitions, it’s an argument about logical priority. Altruistic impulses are logically a subset of selfish ones, because all impulses are selfish, because they’re only experienced internally. (I’m using “impulse” as roughly synonymous with an action taken because of values.) Altruism is only relevant to your morality insofar as you value altruistic actions. Altruism can only be justified on somewhat selfish grounds. (To clarify, it can be justified on other grounds, but I don’t think those grounds make sense.)

• all impulses are selfish because they’re only experienced internally.

I think defining “selfish” as “anything experienced internally” is a very limiting definition that makes it a pretty useless word. The concept of ‘selfishness’ can only be applied to human behaviour/motivations; physical-world phenomena like storms can’t be selfish or unselfish, it’s a mind-level concept. Thus, if you pre-define all human behaviour/motivations as selfish, you’re ruling out the opposite of selfishness existing at all. Which means you might as well not bother with using the word “selfish” at all, since there’s nothing that isn’t selfish.

There’s also the argument from common usage: it doesn’t matter how you define a word in your head; communication is with other people, who have their own definitions of that word in their heads, and most people’s definitions are likely to be the common usage of the word, since how else would they learn what the word means? Most people define “selfishness” such that some impulses are selfish (i.e. Sally taking the last piece of cake because she likes cake) and some are not selfish (Sally giving Jack the last piece of cake, even though she wants it, because Jack hasn’t had any cake yet and she already had a piece). Obviously both of those reactions are the result of impulses bouncing around between neurons, but since we don’t have introspective access to our neurons firing, it’s meaningful for most people to use selfishness or unselfishness as labels.

• To comment on the linguistic issue: yes, this particular argument is silly, but I do think it is legitimate to define a word and then later discover it points out something trivial or nonexistent. Like if we discovered that everyone would wirehead rather than actually help other people in every case, then we might say “welp, guess all drives are selfish” or something.

• Sally doesn’t give Jack the cake because Jack hasn’t had any; rather, Sally gives Jack the cake because she wants to. That’s why explicitly calling the motivation selfish is useful: it clarifies that obligations are still subjective and rooted in individual values (it also clarifies that obligations don’t mandate sacrifice or asceticism or any other similar nonsense). You say that it’s obvious that all actions occur from internally motivated states as a result of neurons firing, but it’s not obvious to most people, which is why pointing out that the action stems from Sally’s internal desires is still useful.

• Why not just specify to people that motivations or obligations are “subjective and rooted in individual values”? Then you don’t have to bring in the word “selfish”, with all its common-usage connotations.

• I want those common-usage connotations brought in because I want to eradicate the taboo around them, I guess. I think that people are vilified for being selfish in lots of situations where being selfish is a good thing, at least from that person’s perspective. I don’t think that people should ever get mad at defectors in Prisoner’s Dilemmas, for example, and I think that saying that all of morality is selfish is a good way to fix this kind of problem.

• This line of discussion says nothing on the object level. The words “altruistic” and “selfish” in this conversation have ceased to mean anything that anyone could use to meaningfully alter his or her real-world behavior.

• Altruistic behavior is usually thought of as motivated by compassion or caring for others, so I think you are wrong. You are the one arguing about definitions in order to trivialize my point, if anything.

• The reason I rejected the utility function, and why I rejected this argument, is that I judged them useless.

What would you recommend people do, in general? I think this is a question that is actually valuable. At the least, I would benefit from considering other people’s answers to it.

• I don’t understand how your reply is responsive.

I recommend that people act in accordance with their (selfish) values, because no other values are situated so as to be motivational. Motivation and values are brute facts, chemical processes that happen in individual brains, but that actually gives them an influence beyond that of mere reason, which could never produce obligations. My system also offers a solution to the paralysis brought on by infinitarian ethics: it’s not the aggregate amount of well-being that matters, only mine.

Because I believe this, recognizing that altruism is a subset of egoism is important for my system of ethics. I still believe in altruistic behavior, but only that which is motivated by empathy, as opposed to some abstract sense of duty or fear of God’s wrath or something.

Does my position make more sense now?

• Do you disagree with any matters of fact that I have asserted or implied? When you try to have a discussion like the one you are trying to have, about “logical necessity” and so on, you are just arguing about words. What do you predict about the world that is different from what I predict?

• I think that it is important to recognize the relationships between thought processes, because having a well-organized mind allows us to change our minds more efficiently, which improves the quality of our predictions. So long as you recognize that all moral behavior is motivated by internal experiences and values, I don’t really care what you call it.

• It’s much less pretty than the PDF, but if anyone else wants a spreadsheet with write-in-able blanks, I have made a Google doc.

• I’m currently trying to evaluate how to adjust some of these for problems related to mental illness. For example, 4.3:

If I find my thoughts circling around a particular word, I try to taboo the word, i.e., think without using that word or any of its synonyms or equivalent concepts. (E.g. wondering whether you’re “smart enough”, whether your partner is “inconsiderate”, or if you’re “trying to do the right thing”.)

Whenever I taboo words, I start developing pressured speech, and begin mumbling the tabooed words subconsciously. If I continue to try to force the taboo, this eventually develops into self-harming behavior.

Another example is 5.2:

I quantify consequences—how often, how long, how intense.

Whenever I attempt to quantify consequences, I have to push through absurd imaginings: if I believe someone is angry at me, even if they’re a good friend, my imagination tends to produce vivid imagery of them dismembering, raping, and torturing me while simultaneously performing actions to keep me alive longer, even if I know they don’t even possess the skills necessary to perform the acts I’m imagining. It takes an extraordinary amount of mental effort and energy to push through that to actually quantify consequences.

Another example is 6.2:

I talk to my friends or deliberately use other social commitment mechanisms on myself.

I tend to not have very many friends that I can commit to, and when I do, I tend to only use commitment to perform a self-shaming and self-punishment cycle, rather than to actually goad me into performing the desired behavior.

• Is your mental illness being treated? Are you seeing someone trained and experienced in managing mental illness? I would put much, much more emphasis on getting to a place where you aren’t self-harming than on trying to develop rationality habits, especially if the latter seems to be interfering with the former.

• No, because I’m currently not good at keeping a job, and equally not good at navigating the bureaucracies necessary to suckle on the government’s teat. “Getting to a place where I’m not self-harming” is a nice pipe dream, but as it is, we optimise for those goals which we actually stand a reasonable chance of accomplishing.

Put another way: let P_t(n) be my probability of getting into therapy after expending n units of resource on getting into therapy, and U_t(n) the utility of therapy after spending those n units; likewise, let P_r(n) and U_r(n) be the probability and utility of becoming more rational after spending n units on that instead. If I only have n resource units available, and P_t(n)·U_t(n) < P_r(n)·U_r(n), then I know what to spend those n resource units on, no matter how much P_t(n+delta)·U_t(n+delta) > P_r(n+delta)·U_r(n+delta), because I don’t have that extra delta worth of resource units.

Sometimes poor people make what look like bad choices from the outside, because those are the best choices they have.
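The expected-value comparison above can be sketched in a few lines of Python; every number and functional shape here is a hypothetical placeholder, chosen only so that therapy pays off solely beyond a resource threshold the speaker doesn’t have:

```python
def best_use(n_units, p_therapy, u_therapy, p_rationality, u_rationality):
    """Spend the n units actually available on whichever option has the
    higher expected utility; options only affordable at n + delta are moot."""
    ev_therapy = p_therapy(n_units) * u_therapy(n_units)
    ev_rationality = p_rationality(n_units) * u_rationality(n_units)
    return "therapy" if ev_therapy >= ev_rationality else "rationality"

# Hypothetical shapes: therapy only starts working past 5 units of resources;
# rationality practice pays off modestly at any level of investment.
p_t = lambda n: 0.0 if n < 5 else 0.8
u_t = lambda n: 100.0
p_r = lambda n: 0.5
u_r = lambda n: 10.0

print(best_use(3, p_t, u_t, p_r, u_r))  # rationality (EV 5.0 vs 0.0)
print(best_use(6, p_t, u_t, p_r, u_r))  # therapy     (EV 80.0 vs 5.0)
```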

• I’m not much for suckling on the government’s teat either. How much of a chance do you think you’d have of keeping a job if you put your mind to it?

There could be other options aside from therapy. A lot of people I respect have recommended Nathaniel Branden’s books. I have heard a bit about Internal Family Systems (IFS) as well, which as far as I know can be done by yourself. I’m by no means an expert, but maybe these can act as leads for you to get started on your own (presuming you haven’t already looked into them).

• How much of a chance do you think you’d have of keeping a job if you put your mind to it?

Empirically, a very poor one. Or rather, more accurately: I either have a very poor chance of keeping a job if I put my mind to it, OR I have a very poor chance of putting my mind to it. I’m not sure how to tell which is actually the case right now, but maybe I could tell if I actually put my mind to it (heh).

Unfortunately, since “putting my mind to things” is a big part of what’s actually broken, I’m not sure where to proceed, or even whether I should proceed. Oftentimes, my strongest impulse leans towards slapping a big “DEFECTIVE” label on my forehead and tossing myself in the recycle bin.

• I urge you to strongly consider the possibility that your mind is telling you that you don’t like this kind of work. At best, “defective” is a circular label, not an analytical result of your personality.

That may not be the most useful information, economically speaking. But it may help you avoid generalizing your experiences at the current job onto future jobs. In short, you aren’t lazy; you just haven’t found situations that put you in a position to succeed (by ensuring sufficient appropriate motivation).

• I used to think that way. The frustrating thing is, I used to LOVE work of all kinds. What I hated was people with arbitrary power over me deliberately sabotaging my work, mostly (it seemed) because they were angry that I enjoyed it so much. One of the most powerful lessons I ever learned was that people at my socioeconomic level don’t GET to “enjoy” their work. Even by accident.

I never really learned diplomacy and power politics, primarily due to being taught a form of “learned helplessness” about it when I was very young (I was not in a socioeconomic class where it was appropriate to display the amount of enthusiasm, talent, and intelligence that I had, and I didn’t know how to hide it).

Unfortunately, this led to making a lot of really, really bad political mistakes, each of which slowly eroded my enthusiasm for doing… well, at this point, for doing anything.

After a few years of being out of practice, I now find that I can’t even bring myself to get out of bed in the morning and work on something interesting, because “what’s the point?”

To me, there is NO difference between “lazy” and “haven’t found situations that put you in a position to succeed”. They are IDENTICAL. If society doesn’t put you in positions to succeed, it has decided that you are lazy, and that means you ARE lazy. Agency has nothing to do with culpability, only blame.

• Your rules seem designed to sabotage you by making you feel miserable. The impulse to create scripts of how interactions are supposed to go is a good one, but the point of those scripts is to prepare you to succeed.

You need a new social environment. If none of the people you hang out with is really your friend, stop spending time with them. Particularly if they aren’t emotionally safe.

We talked about boardgaming as one possible new environment. What about charitable volunteering? If you find the right charity, the organizations are desperate for your help.

Regardless of what specific thing you do, find something to succeed at. Don’t set the bar ridiculously high: if what you can do is show up, then find something where showing up is success. You are absolutely worth it. Your negative feelings are a habit that you can break.

Where do you live? Maybe I can help? (Private message if you prefer.)

• This post is being made while repressing a massive array of scripted responses, so if it bounces around or seems incoherent, it’s because only a VERY small portion of my brainpower is currently available for rational analysis.

1. I tend to sabotage friendships, due to being inherently distrustful / untrustworthy (my cynical disposition has led me to believe that these are ultimately the same thing). Thus, your offer to help personally is admirable, but I have a very high threshold to pass before I can trust it as actually helpful. Does this make sense?

2. I’ve performed acts of charitable volunteering, but over the past few years I’ve had very little energy for anything. I tend to have less than half an hour’s worth of useful energy per day for anything that involves leaving my little hovel, and by the end of that half an hour I tend to start socially self-destructing.

3. It’s not so much a problem that friends aren’t emotionally safe for me, as that I am not emotionally safe for me. Actual friends tend to actually empathize, which means that they quickly become freaked out and leave when they realize how helpless they are to do anything but watch me self-harm. This provides a filter that ensures that when I DO absolutely need emotional interaction with other human beings, the only ones left are the ones who don’t care as much about the waves of misery I’m exuding.

• Thus, your offer to help per­son­ally is ad­mirable, but I have a very high thresh­old to pass be­fore I can trust it as ac­tu­ally helpful. Does this make sense?

Makes sense. Whether you be­lieve it or not, I’m not do­ing this for my benefit. I care about you, and so does ev­ery­one else who is offer­ing you ad­vice.

This post is be­ing made while re­press­ing a mas­sive ar­ray of scripted re­sponses.

Do you think these scripts make you hap­pier? Are there changes to the scripts that you can imag­ine that would cause them to make you hap­pier?

More generally, is there any change you could make in your life that you think you would really make that would lead to any increased happiness? If there are reasons not to make that change, do you think the reasons are realistic in likelihood and in magnitude?

My ex­pe­rience with anx­iety is that the feel­ings never went away, I just got bet­ter at do­ing what I thought needed do­ing, even with the anx­ious feel­ings.

• Do you think these scripts make you hap­pier?

No, but I have spent al­most 30 years do­ing script-mod­ifi­ca­tion, and I be sore tired.

Are there changes to the scripts that you can imag­ine that would cause them to make you hap­pier?

Pos­si­bly, but the effort in­volved in do­ing more script-mod­ifi­ca­tion is no longer some­thing I have the en­ergy for.

My ex­pe­rience with anx­iety is that the feel­ings never went away, I just got bet­ter at do­ing what I thought needed do­ing, even with the anx­ious feel­ings.

Ab­solutely. That’s how I de­scribe most of what peo­ple call my “su­per-pow­ers”. I tend to be amaz­ingly com­pe­tent in crisis situ­a­tions, sim­ply be­cause I don’t panic, I im­me­di­ately as­sess the best plan of ac­tion, I iden­tify ev­ery­one who is pan­ick­ing, and I im­me­di­ately give them short com­mands that are clearly iden­ti­fi­able as helping the situ­a­tion, so they feel like they can ac­tu­ally do some­thing about what­ever’s ter­rify­ing them. Peo­ple have asked me how I man­age to be com­pletely un­afraid of life-or-death situ­a­tions, and I’ve sim­ply ex­plained “of course I’m com­pletely ter­rified. I just do it any­ways.” (and then I usu­ally go throw up, be­cause if the situ­a­tion has calmed enough that peo­ple can ask me how I pul­led it off, then the situ­a­tion has calmed enough that I can go throw up).

More generally, is there any change you could make in your life that you think you would really make that would lead to any increased happiness? If there are reasons not to make that change, do you think the reasons are realistic in likelihood and in magnitude?

The prob­lem is, I’ve already tried to solve this prob­lem by edit­ing out “per­sonal hap­piness” as a goal to seek. I spent about 5 years on this, and in the pro­cess have man­aged to edit out a good amount of per­sonal iden­tity, self-preser­va­tion, and so on. It turns out there are biolog­i­cal safe­guards in place that keep me from go­ing all the way with it, so what I’ve got is a col­lec­tion of ex­traor­di­nar­ily buggy and non-adap­tive scripts, usu­ally run­ning in di­rect com­pe­ti­tion with each other and ty­ing up all my sys­tem re­sources with­out ac­tu­ally ac­com­plish­ing any­thing what­so­ever.

Of course, since they’re us­ing up all my sys­tem re­sources, I no longer have enough free pro­ces­sor or swap space to fur­ther mod­ify my scripts. I’m kinda stuck with­out out­side re­sources, and I’m no longer ca­pa­ble of gen­er­at­ing those.

But ul­ti­mately, neu­rolog­i­cal and biolog­i­cal sys­tems are in­cred­ibly com­plex, and they all (so far as we know) break down even­tu­ally. I don’t think this break­down pro­cess is par­tic­u­larly ex­traor­di­nary or note­wor­thy, com­pared to any other pos­si­ble way that I could de­grade into non-func­tion­al­ity.

• The prob­lem is, I’ve already tried to solve this prob­lem by edit­ing out “per­sonal hap­piness” as a goal to seek.

Do you think that removing personal happiness as one of your goals has helped you be more productive? What steps could you take to add some amount of personal happiness back as one of your goals? Would that be worthwhile?

Do you think it is likely that you would take those steps? If there are reasons not to make that change, do you think the reasons are realistic in likelihood and in magnitude?

(I’m ask­ing ques­tions be­cause I hope this will help more than other types of in­ter­ac­tions. There’s no rea­son that you should feel obli­gated to be emo­tion­ally vuln­er­a­ble to­wards me. Without emo­tional vuln­er­a­bil­ity—from tak­ing apart your per­son­al­ity—spe­cific sug­ges­tions /​ in­struc­tions about what to change can eas­ily be taken the wrong way. But if ques­tions like this are com­ing off as pas­sive-ag­gres­sive, I want to stop.)

• Have you tried be­ing a vol­un­teer fire­fighter?

• Ac­tu­ally, yes! Two years ago. I spent about 2 years be­fore­hand get­ting into the best shape I had ever been in in my life—took Capoeira, spent an hour a day in the gym, ran 3 miles ev­ery morn­ing—I set a goal that as soon as I broke 150 lbs (start­ing from 110), I’d go in and ap­ply.

Still didn’t pass the phys­i­cal.

• Also, this (warn­ing, quite emo­tion­ally raw).

• Heh. Believe it or not, that’s not as much of a prob­lem. I’ve lived with con­stant suici­dal ideation for al­most 27 years now, since I was 12. I’ve be­come al­most com­pletely in­ured to it, and I’ve performed enough un­suc­cess­ful at­tempts that my mid-brain has learned very well not to bother. It’s amus­ing to think that learned hel­pless­ness can be turned into a tool to com­bat suici­dal ideation, but there it is. (I imag­ine this is why so many anti-de­pres­sants in­crease the risk of suicide—the learned hel­pless­ness is a tighter cy­cle, so it gets lifted faster, at which point the ideation hasn’t faded yet and sud­denly you imag­ine the pos­si­bil­ity of some­thing ac­tu­ally work­ing, and it all fi­nally be­ing over for real.)

• Thanks for post­ing this. I always en­joy these “in-prac­tice” ori­ented posts, as I feel they help me check if I truly un­der­stand the con­cepts I learn here, in a similar way that ex­am­ple prob­lems in text­books check if I know how to cor­rectly ap­ply the ma­te­rial I just read.

• 7 Nov 2012 16:16 UTC

This is awe­some. I might re­move the ex­am­ples, print down the rest of the list, and read it ev­ery morn­ing when I get up and ev­ery night be­fore go­ing to sleep. OTOH I have a few quib­bles with some ex­am­ples:

Re­cent ex­am­ple from Anna: Jump­ing off the Strato­sphere Ho­tel in Las Ve­gas in a wire-guided fall. I knew it was safe based on 40,000 data points of peo­ple do­ing it with­out sig­nifi­cant in­jury, but to per­suade my brain I had to vi­su­al­ize 2 times the pop­u­la­tion of my col­lege jump­ing off and sur­viv­ing. Also, my brain some­times seems much more pes­simistic, es­pe­cially about so­cial things, than I am, and is al­most always wrong.

For some reason my brain is more comfortable working with numbers than with visualizations. That can be bad for signalling: a few years ago there was a terrorist attack in London which affected IIRC about 300 people; my mother told me “you should call [your friend who’s there] and ask him if he’s all right”, and I answered “there are 10 million people in London, so the probability that he was involved is about 1 in 30,000, which is less than the probability that he would die naturally in...”; my mother called me heartless before I even finished the sentence.
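For what it's worth, the arithmetic in that reply checks out as an order-of-magnitude estimate. A quick sketch in Python, using the comment's own rough figures (300 affected, 10 million Londoners; both are approximations):

```python
# Chance that one particular Londoner was among those affected,
# using the comment's rough figures (both are approximations).
affected = 300
population = 10_000_000

one_in = population / affected
print(f"about 1 in {one_in:,.0f}")  # about 1 in 33,333 — i.e. roughly "1 in 30,000"
```

The estimate is only as good as its inputs, of course; the point is that the base rate dominates unless you have specific evidence the friend was near the attack.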

Recent example from Anna’s brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.)

There’s a huge difference: someone living in Silicon Valley on $70K + x and considering whether to stay there or move to Santa Barbara and earn x would be used to living on $70K + x; whereas someone living in Santa Barbara on x and considering whether to move to Silicon Valley and earn x + $70K or stay there would be used to living on x. This would affect how much each of them would enjoy a given amount of money. Also, the former would already have a social circle in Silicon Valley, and the latter wouldn’t.

Re­cent ex­am­ple from Anna: I no­ticed that ev­ery time I hit ‘Send’ on an email, I was vi­su­al­iz­ing all the ways the re­cip­i­ent might re­spond poorly or some­thing else might go wrong, nega­tively re­in­forc­ing the be­hav­ior of send­ing emails. I’ve (a) stopped do­ing that (b) in­stalled a habit of smil­ing each time I hit ‘Send’ (which pro­vides my brain a jolt of pos­i­tive re­in­force­ment). This has re­sulted in strongly re­duced pro­cras­ti­na­tion about emails.

Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren’t likely to respond badly, I want to believe they aren’t likely to respond badly. What is true is already so; owning up to it doesn’t make it worse. The solution to that problem is to think twice, re-read the email, and think about ways to make it less likely to be interpreted in an unintended way before hitting Send.

• my mother told me “you should call [your friend who’s there] and ask him if he’s all right”, and I an­swered “there are 10 mil­lion peo­ple in Lon­don, so the prob­a­bil­ity that he was in­volved is about 1 in 30,000, which is less than the prob­a­bil­ity that he would die nat­u­rally in...”; my mother called me heartless be­fore I even finished the sen­tence.

Your math is right but your mother has the right in­ter­pre­ta­tion of the situ­a­tion. If your friend is dead, call­ing him does nei­ther of you any good! This is a 29,999 out of 30,000 chance to earn brownie points.

• A differ­ent ap­proach might be to do the math on how likely it is that some­one the friend knows was in­volved in the in­ci­dent. Or maybe just call to dis­cuss the pos­si­ble reper­cus­sions and the prob­a­ble over­re­ac­tions that the lo­cal gov­ern­ment will have.

How­ever, for most of my own friends, if I did call them in ex­actly such a situ­a­tion, they’d tell me al­most ex­actly what army1987 said to their mother. Un­less they hap­pened to be dead or lost a friend to the event or some­thing.

• Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren’t likely to respond badly, I want to believe they aren’t likely to respond badly. What is true is already so; owning up to it doesn’t make it worse. The solution to that problem is to think twice, re-read the email, and think about ways to make it less likely to be interpreted in an unintended way before hitting Send.

The thing is, it seems quite clear that the problem wasn’t how likely they were to respond badly, but that Anna (?) would visualize and anticipate the negative response beforehand, based on no evidence that they would respond poorly, simply as a programmed mental habit. This would end up creating a vicious circle where, each time, the negatives from past times make it even more likely that this time feels bad, regardless of the actual reactions.

The tac­tic of smil­ing re­in­forces the ac­tion of send­ing emails in­stead of ter­ror­iz­ing your­self into never send­ing emails any­more (which I in­fer from con­text would be a bad thing), and once you’re rid of the loom­ing vi­cious cir­cle you can then base your pre­dic­tions of the re­ac­tion on the con­tent of the email, rather than have it be pre­de­ter­mined by your own feel­ings.

(Obligatory nitpicker’s note: I agree with pretty much everything you said; I just didn’t think that the real event in that example involved a bad decision, as you seemed to imply.)

• This is awe­some. I might re­move the ex­am­ples, print down the rest of the list, and read it ev­ery morn­ing when I get up and ev­ery night be­fore go­ing to sleep.

In­ter­est­ing you should say that. About a week ago I sim­plified this into a more literal check­list de­signed to be used as part of a nightly wind-down, to see if it could main­tain or in­still habits. I de­signed the check­list based largely on em­piri­cal re­sults from NASA’s re­view of the fac­tors for effec­tive­ness of pre-flight safety check­lists used by pi­lots, al­though I chased down a num­ber of other check­list-re­lated re­sources. I’m cur­rently ac­tively test­ing effects on my­self and oth­ers, both try­ing to test to make sure it would ac­tu­ally be used, and get­ting the time down to the min­i­mum pos­si­ble (it’s hov­er­ing around two min­utes).

P.S. I’m not as­so­ci­ated with CFAR but the check­list is an ex­per­i­ment on their re­quest.

If you were to test your suggestion for two weeks, I would be interested to hear the results. My prediction (with 80% certainty) is: You will get positive results for a night or two. Within ten days, you will find the list aversive / too much work and stop reading it, begin to glance over it without processing anything, or actively stop to fix one of the above problems. (The army name makes me less certain than usual—my stereotype says you may be bored and/or disciplined.)

• Can you point us to the more in­ter­est­ing check­list re­sources?

• Ab­solutely. I can give bet­ter re­sources if you can be more spe­cific as to what you’re look­ing for.

I recom­mend The Check­list Man­i­festo first as an overview, as well as a ba­sic un­der­stand­ing of akra­sia, and try­ing and failing to make and use some check­lists your­self.

The re­sources I spent most of my time with were very spe­cific to what I was work­ing on, and so I wouldn’t recom­mend them. How­ever, just in case some­one finds it use­ful, Hu­man Fac­tors of Flight-Deck Check­lists: The Nor­mal Check­list draws at­ten­tion to some com­mon failure modes of check­lists out­side the check­list it­self.

• 1 Dec 2012 1:56 UTC

You will get pos­i­tive re­sults for a night or two. Within ten days, you will find the list aver­sive /​ too much work and stop read­ing it, be­gin to glance over it with­out pro­cess­ing any­thing, or ac­tively stop to fix one of the above prob­lems.

That’s in­deed what hap­pened.

(The army name makes me less cer­tain than usual—my stereo­type says you may be bored and/​or dis­ci­plined.)

That’s just a hypocorism for my first name. I have never been in the armed forces. (I re­gret pick­ing this nick­name be­cause it has gen­er­ated con­fu­sion sev­eral times, but I’ve used it on the In­ter­net ever since I was 12 and I’m kind of used to it.)

• This sounds in­ter­est­ing. I wasn’t en­tirely se­ri­ous, but I’m go­ing to do this for real now. (I haven’t de­coded the rot13ed part, of course.)

• You have the right con­clu­sion but the wrong rea­son. Most peo­ple would ap­pre­ci­ate be­ing thought of in a dis­aster, so call­ing him if he’s al­ive would be good—ex­cept that the phone net­works, par­tic­u­larly cell net­works, tend to be crip­pled by overuse in sud­den dis­asters. Stay­ing off the phones if you don’t need to make a call helps with this.

• What about “when faced with a hard prob­lem, close your eyes, clear your mind and fo­cus your at­ten­tion for a few min­utes to the is­sue at hand”?

It sounds so very simple, yet I routinely fail to do it: when, e.g., I try to solve some Project Euler problem and don’t see a solution in the first few seconds, I do something else for a while, until I finally get a handle on my slippery mind, sit down, and solve the bloody thing.

• Looks like a very use­ful list. One com­ment: I found the ex­am­ple in 2(a) a bit com­pli­cated and very difficult to parse.

• Some­thing to add: al­lo­cat­ing at­ten­tion in the cor­rect or­der:

1. emotions

2. felt meaning

3. ver­bal thoughts

Other­wise you have the failure mode of avoid­ing painful emo­tions (even if they’re be­ing trig­gered er­ro­neously) and then all sorts of bad things hap­pen. So check in with (1) be­fore (2) and (3). And check in with (2) be­fore ap­ply­ing (3), be­cause oth­er­wise you’re us­ing cached thoughts.

• At some point I started feeling like my bf is more interested in telling me things than in having a conversation with me. So I started trying to flag the instances where he did it and the instances where he didn’t, and it kinda felt like it matched my feeling, since I had several more examples of one than the other. But I didn’t document them carefully or anything, so how do I know I’m not falling into the confirmation bias trap? Or is this just the wrong way to handle something that started out as a … feeling?

• In your po­si­tion, I would do a few differ­ent things.

One is what you de­scribe: ac­tu­ally count in­stances and see if the pat­tern con­forms to my ex­pec­ta­tions.

But also, I would try to articulate more clearly what the choices are. That is, what do I look for when I want to see if he is interested in having a conversation? Am I looking for him to listen to what I have to say? To ask questions about it? To not challenge it when he disagrees? To look directly at me and not do other things while I’m talking? To allow me to pause in the middle of what I’m saying without treating that as an opportunity to change the subject? Something else? All of the above?

Also, I would ask my­self what would fol­low if it turned out that I was over­count­ing con­fir­ma­tions? That is, let’s say I con­clude that one thing that makes me feel like my boyfriend isn’t in­ter­ested in hav­ing a con­ver­sa­tion with me is when he in­ter­rupts me. I might ask my­self, sup­pose I start ac­tu­ally count­ing in­stances and I con­clude that he only in­ter­rupts me one con­ver­sa­tion out of ten, when I had es­ti­mated it was nine con­ver­sa­tions out of ten. It is likely, then, that I’d suc­cumbed to con­fir­ma­tion bias.

But… what fol­lows from that?

One pos­si­bil­ity is “Oh… well, 10% in­ter­rup­tions isn’t that big a deal. I should get over it.”
Another pos­si­bil­ity is “Clearly, 10% in­ter­rup­tions is enough to up­set me. We should try for a lower rate.”

Know­ing how I would go about mak­ing that choice for a mea­sured prob­a­bil­ity once I have it is, IME, an im­por­tant part of ac­tu­ally im­prov­ing the sys­tem. Other­wise I’m just mak­ing mea­sure­ments.

• Clearly, 10% in­ter­rup­tions is enough to up­set me. We should try for a lower rate.

I’m con­fused why she should mea­sure it at all. This line of rea­son­ing seems to pre­clude the need for mea­sure­ment.

• Yeah, I think this is the hard­est part be­cause in some cases, ex­am­in­ing the ac­tual facts does make me feel bet­ter. But in this case, if it does turn out to be 10% but the bad feel­ing doesn’t go away, I’m go­ing to feel like a jerk. Also, it’s im­pos­si­ble to com­pare to the past at this point, which is when it felt like we had more real con­ver­sa­tions, but I have no data from it be­cause back then I didn’t have any rea­son to track it.

• if it does turn out to be 10% but the bad feel­ing doesn’t go away, I’m go­ing to feel like a jerk

Why?

• To break confirmation bias, you need an objective log. Write down every time you recognize a confirming event, as well as every time you recognize an event which is nonconfirming. Then, estimate the likelihood that you would recognize and write down a confirming event, and the likelihood that you would recognize and write down a nonconfirming event. Use your surprise that a nonconfirming event just occurred, as well as your surprise that you noticed it and made a note of it, to form that estimate.

If you find yourself more surprised that you made a note of a nonconfirming event than that it happened, it probably happens much more often than you note it.
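The correction step that comment gestures at can be made concrete. A minimal sketch in Python with made-up tallies; the two recognition probabilities are exactly the subjective estimates the comment says to form from your own surprise:

```python
def estimate_true_count(logged, p_logged):
    """Correct a raw tally for the chance that an event was noticed and logged."""
    return logged / p_logged

# Made-up example tallies from an objective log
confirming_logged = 18     # confirming events actually written down
nonconfirming_logged = 2   # nonconfirming events actually written down

# Subjective estimates: probability you'd notice *and* log each kind of event
p_log_confirming = 0.8     # confirming events fit the story, so you log most of them
p_log_nonconfirming = 0.2  # nonconfirming events are easy to let slide

true_confirming = estimate_true_count(confirming_logged, p_log_confirming)           # 22.5
true_nonconfirming = estimate_true_count(nonconfirming_logged, p_log_nonconfirming)  # 10.0

print(true_confirming / true_nonconfirming)  # 2.25
```

The point of the correction is visible in the numbers: a raw log showing 9:1 confirming-to-nonconfirming shrinks to about 2:1 once you account for how much more readily you log the confirming cases.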

• This seems tricky. What is (I would guess) im­por­tant about your situ­a­tion is that you want to have more con­ver­sa­tions with him. So hey, if you want to have more con­ver­sa­tions, do things that will re­sult in that hap­pen­ing.

If your num­ber of con­ver­sa­tions changes no­tice­ably and that feel­ing doesn’t go away, or you get the same feel­ing about some­thing else in­stead, then yeah, maybe the root cause is some­thing else. (It’s like when I’m pro­cras­ti­nat­ing and I feel like I re­ally want to visit web­site X, and then I feel I re­ally want to read book Y, but the feel­ing is re­ally just “pro­cras­ti­na­tion-feel­ing” from not want­ing to start chore Z.)

• The PDF ver­sion is very nice look­ing and very read­able, thanks for mak­ing it. I think peo­ple on here of­ten un­der­es­ti­mate the benefits of low hang­ing aes­thetic fruit.

• I just joined the com­mu­nity, how can I save or mark this ar­ti­cle so it is available for me to read at any­time?

• Book­marks in your browser. There’s also the dis­kette icon be­tween the two hori­zon­tal bars that sep­a­rate the ar­ti­cle and the com­ment sec­tion.

• I think the “liked” tab on your user page dis­plays pre­cisely those ar­ti­cles that you’ve up­voted. So up­vot­ing an ar­ti­cle will make it available there in the fu­ture.

• And down­vot­ing an ar­ti­cle will add it to the “dis­liked” tab. But please don’t vote ar­ti­cles solely for this pur­pose.

• Has the check­list been re­vis­ited or op­ti­mized in any way since its origi­nal for­mu­la­tion? (By CFAR or oth­er­wise?)

• I re­ally ap­pre­ci­ate hav­ing the ex­am­ples in paren­the­ses and ital­i­cised. It lets me eas­ily skip them when I know what you mean. I wish oth­ers would do this.

• Great list. My guide post for ra­tio­nal­ity and re­lated is­sues has been the works of Carl Sa­gan, as he had many books and good ad­vice for think­ing crit­i­cally. His works are an ab­solute must read (or watch) for any­body want­ing to wade through the mass of mis­di­rec­tion that ex­ists in the world.

• This all sounds quite groovy, but are there any sug­ges­tions on how I could go about im­ple­ment­ing them into my daily pat­tern of thought? I won­der if per­haps an Anki deck would have any merit what­so­ever in ac­com­plish­ing this...

• Why are these rationality habits? Based on what? All the examples are personal. Isn’t it possible to (also) give a scientific example for each habit: study ….. shows that ….; hence 1) the habit is useful for dealing with this bias, and 2) it doesn’t create or reinforce other biases.

• Another one: You see a way to do things that in theory might work better than what everyone else is doing, but in practice no one seems to use. Do you investigate it and consider exploiting it?

Example: You’re trying to get karma on reddit. You notice that http://www.reddit.com/r/randomization/ has almost a million subscribers but no new submissions in the past two months. Do you think “hm, that’s weird” and keep looking for a subreddit to submit your link in, or do you think “oh wow, karma feast!”

• For each item, you might ask your­self: did you last use this habit...

Maybe it’s worth a poll, if some­one feels like cre­at­ing one. I’m not sure how to make a multi-level poll and it prob­a­bly would be too pre­sump­tu­ous of me to cre­ate 24 replies with one poll in each.

• It’s easy to make a check­list by go­ing to Google docs /​ Google drive, click­ing “cre­ate”, and choos­ing “form”.

• The Check­list Man­i­festo is very in­ter­est­ing about what goes into an ex­cel­lent check­list rather than a ca­su­ally con­structed check­list. It’s about in­sti­tu­tional check­lists rather than per­sonal check­lists, though.

• You can’t do multi-re­sponse polls? As in, check all that ap­ply?

• There are 24 sep­a­rate sub­ques­tions with 6 an­swer op­tions each.