SotW: Check Consequentialism

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills. The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil. We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt. This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired. See here for details.)


Exercise Prize: Check Consequentialism

In philosophy, “consequentialism” is the belief that doing the right thing makes the world a better place, i.e., that actions should be chosen on the basis of their probable outcomes. It seems like the mental habit of checking consequentialism, asking “What positive future events does this action cause?”, would catch numerous cognitive fallacies.

For example, the mental habit of consequentialism would counter the sunk cost fallacy—if a PhD wouldn’t really lead to much in the way of desirable job opportunities or a higher income, and the only reason you’re still pursuing your PhD is that otherwise all your previous years of work will have been wasted, you will find yourself encountering a blank screen at the point where you try to imagine a positive future outcome of spending another two years working toward your PhD—you will not be able to state what good future events happen as a result.

Or consider the problem of living in the should-universe; if you’re thinking, I’m not going to talk to my boyfriend about X because he should know it already, you might be able to spot this as an instance of should-universe thinking (planning/choosing/acting/feeling as though within / by-comparison-to an image of an ideal perfect universe) by having done exercises specifically to sensitize you to should-ness. Or, if you’ve practiced the more general skill of Checking Consequentialism, you might notice a problem on asking “What happens if I talk / don’t talk to my boyfriend?”—providing that you’re sufficiently adept to constrain your consequentialist visualization to what actually happens as opposed to what should happen.

Discussion:

The skill of Checking Consequentialism isn’t quite as simple as telling people to ask, “What positive result do I get?” By itself, this mental query is probably going to return any apparent justification—for example, in the sunk-cost-PhD example, asking “What good thing happens as a result?” will just return, “All my years of work won’t have been wasted! That’s good!” Any choice people are tempted by seems good for some reason, and executing a query about “good reasons” will just return this.

The novel part of Checking Consequentialism is the ability to discriminate “consequentialist reasons” from “non-consequentialist reasons”—being able to distinguish that “Because a PhD gets me a 50% higher salary” talks about future positive consequences, while “Because I don’t want my years of work to have been wasted” doesn’t.

It’s possible that asking “At what time does the consequence occur and how long does it last?” would be useful for distinguishing future-consequences from non-future-consequences—if you take a bad-thing like “I don’t want my work to have been wasted” and ask “When does it occur, where does it occur, and how long does it last?”, you will with luck notice the error.

Learning to draw cause-and-effect directed graphs, a la Judea Pearl and Bayes nets, seems like it might be helpful—at least, Geoff was doing this while trying to teach strategicness and the class seemed to like it.
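For instance, a decision and its reasons might be encoded as a small directed graph, so that consequentialist reasons show up as descendants of the action while sunk costs show up as ancestors. Here is a minimal sketch in Python; the node names and the use of the networkx library are illustrative assumptions, not anything from Geoff's class:

```python
# Hypothetical sketch: encode the sunk-cost-PhD example as a causal DAG.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("three years already spent", "finish the PhD"),  # sunk cost: points INTO the action
    ("finish the PhD", "credential"),
    ("credential", "higher salary"),                  # consequentialist reason: downstream
])

# Consequentialist reasons for an action are its descendants in the graph;
# "my past work won't have been wasted" can never appear in this set,
# because it is an ancestor of the action, not an effect of it.
print(nx.descendants(g, "finish the PhD"))
# -> {'credential', 'higher salary'}
```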

Sometimes non-consequentialist reasons can be rescued as consequentialist ones. “You shouldn’t kill because it’s the wrong thing to do” can be rescued as “Because then a person will transition from ‘alive’ to ‘dead’ in the future, and this is a bad event” or “Because the interval between Outcome A and Outcome B includes the interval from Fred alive to Fred dead.”

On a five-second level, the skill would have to include:

  • Being cued by some problem to try looking at the consequences;

  • Either directly having a mental procedure that only turns up consequences, like trying to visualize events out into the future, or

  • First asking ‘Why am I doing this?’ and then looking at the justifications to check if they’re consequentialist, perhaps using techniques like asking ‘How long does it last?’, ‘When does it happen?’, or ‘Where does it happen?’.

  • Expending a small amount of effort to see if a non-consequentialist reason can easily translate into a consequentialist one in a realistic way.

  • Making the decision whether or not to change your mind.

  • If necessary, detaching from the thing you were doing for non-consequentialist reasons.

In practice, it may be obvious that you’re making a mistake as soon as you think to check consequences. I have ‘living in the should-universe’ or ‘sunk cost fallacy’ cached to the point where as soon as I spot an error of that pattern, it’s usually pretty obvious (without further deliberative thought) what the residual reasons are and whether I was doing it wrong.

Pain points & Pluses:

(When generating a candidate kata, almost the first question we ask—directly after the selection of a topic, like ‘consequentialism’—is, “What are the pain points? Or pleasure points?” This can be errors you’ve made yourself and noticed afterward, or even cases where you’ve noticed someone else doing it wrong, but ideally cases where you use the skill in real life. Since a lot of rationality is in fact about not screwing up, there may not always be pleasure points where the skill is used in a non-error-correcting, strictly positive context; but it’s still worth asking each time. We ask this question right at the beginning because it (a) checks to see how often the skill is actually important in real life and (b) provides concrete use-cases to focus discussion of the skill.)

Pain points:

Checking Consequentialism looks like it should be useful for countering:

  • Living in the should-universe (taking actions because of the consequences they ought to have, rather than the consequences they probably will have). E.g., “I’m not going to talk to my girlfriend because she should already know X” or “I’m going to become a theoretical physicist because I ought to enjoy theoretical physics.”

  • The sunk cost fallacy (choosing to prevent previously expended, non-recoverable resources from having been wasted in retrospect—i.e., avoiding the mental pain of reclassifying a past investment as a loss—rather than acting for the sake of future considerations). E.g., “If I give up on my PhD, I’ll have wasted the last three years.”

  • Cached thoughts and habits; “But I usually shop at Whole Foods” or “I don’t know, I’ve never tried an electric toothbrush before.” (These might have rescuable consequences, but as stated, they aren’t talking about future events.)

  • Acting-out an emotion—one of the most useful pieces of advice I got from Anna Salamon was to find other ways to act out an emotion than strategic choices. If you’re feeling frustrated with a coworker, you might still want to Check Consequentialism on “Buy them dead flowers for their going-away party” even though it seems to express your frustration.

  • Indignation / acting-out of morals—“Drugs are bad, so drug use ought to be illegal”, where it’s much harder to make the case that countries which decriminalized marijuana experienced worse net outcomes. (Though it should be noted that you also have to Use Empiricism to ask the question ‘What happened to other countries that decriminalized marijuana?’ instead of making up a gloomy consequentialist prediction to express your moral disapproval.)

  • Identity—“I’m the sort of person who belongs in academia.”

  • “Trying to do things” for no reason at all, while your brain still generates activities and actions, because nobody ever told you that behaviors ought to have a purpose or that lack of purpose is a warning sign. This habit can be inculcated by schoolwork, wanting to put in 8 hours before going home, etc. E.g., you “try to write an essay”, and you know that an essay has paragraphs; so you try to write a bunch of paragraphs without having any functional role in mind for each paragraph. “What is the positive consequence of this paragraph?” might come in handy here.

(This list is not intended to be exhaustive.)

Pleasure points:

  • Being able to state and then focus on a positive outcome seems like it should improve motivation, at least in cases where the positive outcome is realistically attainable to a non-frustrating degree and has not yet been subject to hedonic adaptation. E.g., a $600 job may be more motivating if you visualize the $600 laptop you’re going to buy with the proceeds.

Also, consequentialism is the foundation of expected utility, which is the foundation of instrumental rationality—this is why we’re considering it as an early unit. (This is not directly listed as a “pleasure point” because it is not directly a use-case.)

Constantly asking about consequences seems likely to improve overall strategicness—not just leading to the better of two choices being taken from a fixed decision-set, but also having goals in mind that can generate new perceived choices, i.e., improving the overall degree to which people do things for reasons, as opposed to not doing things or not having reasons. (But this is a hopeful eventual positive consequence of practicing the skill, not a use-case where the skill is directly being applied.)

Teaching & exercises:

This is the part that’s being thrown open to Less Wrong generally. Hopefully I’ve described the skill in enough detail to convey what it is. Now, how would you practice it? How would you have an audience practice it, hopefully in activities carried out with each other?

The dumb thing I tried to do previously was to have exercises along the lines of, “Print up a booklet with little snippets of scenarios in them, and ask people to circle non-consequentialist reasoning, then try to either translate it to consequentialist reasons or say that no consequentialist reasons could be found.” I didn’t do that for this exact session, but if you look at what I did with the sunk cost fallacy, it’s the same sort of silly thing I tried to do.

This didn’t work very well—maybe the exercises were too easy, or maybe it was that people were doing it alone, or maybe we did something else wrong, but the audience appeared to experience insufficient hedonic return. They were, in lay terms, unenthusiastic.

At this point I should like to pause, and tell a recent and important story. On Saturday I taught an 80-minute unit on Bayes’s Rule to an audience of non-Sequence-reading experimental subjects, who were mostly either programmers or in other technical fields, so I could go through the math fairly fast. Afterward, though, I was worried that they hadn’t really learned to apply Bayes’s Rule, and wished I had a small pamphlet of practice problems to hand out. I still think this would’ve been a good idea, but...

On Wednesday, I attended Andrew Critch’s course at Berkeley, which was roughly mostly-instrumental LW-style cognitive-improvement material aimed at math students; and in this particular session, Critch introduced Bayes’s Theorem, not as advanced math, but with the aim of getting them to apply it to life.

Critch demonstrated using what he called the Really Getting Bayes game. He had Nisan (a local LWer) touch an object to the back of Critch’s neck, a cellphone as it happened, while Critch faced in the other direction; this was “prior experience”. Nisan said that the object was either a cellphone or a pen. Critch gave prior odds of 60% : 40% that the object was a cellphone vs. pen, based on his prior experience. Nisan then asked Critch how likely he thought it was that a cellphone or a pen would be RGB-colored, i.e., colored red, green, or blue. Critch didn’t give exact numbers here, but said he thought a cellphone was more likely to be primary-colored, and drew some rectangles on the blackboard to illustrate the likelihood ratio. After being told that the object was in fact primary-colored (the cellphone was metallic blue), Critch gave posterior odds of 75% : 25% in favor of the cellphone, and then turned around to look.
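For concreteness, those numbers correspond to a single odds-form Bayes update: posterior odds = prior odds × likelihood ratio. A minimal sketch in Python, where the 2 : 1 likelihood ratio is back-derived from the stated prior and posterior odds, since Critch only drew rectangles rather than giving an exact figure:

```python
# Odds-form Bayes update for the cellphone-vs-pen demonstration.
prior_odds = 60 / 40     # prior odds, cellphone : pen, from prior experience
likelihood_ratio = 2.0   # assumed: P(RGB-colored | cellphone) / P(RGB-colored | pen)

posterior_odds = prior_odds * likelihood_ratio   # 1.5 * 2.0 = 3.0, i.e. 3 : 1

# Convert odds back to a probability for readability.
p_cellphone = posterior_odds / (1 + posterior_odds)
print(f"{p_cellphone:.0%}")   # 75%, matching the stated 75% : 25% posterior
```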

Then Critch broke up the class into pairs and asked each pair to carry out a similar operation on each other: Pick two plausible objects and make sure you’re holding at least one of them, touch it to the other person while they face the other way, prior odds, additional fact, likelihood ratio, posterior odds.

This is the sort of in-person, hands-on, real-life, and social exercise that didn’t occur to me, or Anna, or anyone else helping, while we were trying to design the Bayes’s Theorem unit. Our brains just didn’t go in that direction, though we recognized it as embarrassingly obvious in retrospect.

So… how would you design an exercise to teach Checking Consequentialism?