Communicating rationality to the public: Julia Galef’s “The Straw Vulcan”

Julia Galef’s Skepticon IV talk, The Straw Vulcan, is the best intro-to-rationality talk for the general public I’ve ever seen. Share the link with everyone you know!

Update: Below is the transcript prepared by daenerys:

Emcee: You may recognize our next speaker from somewhere—she was here earlier on our panel. She’s known for her work with the New York City Skeptics and their podcast “Rationally Speaking”, and she’s also the co-author of the rationality blog “Measure of Doubt” with her brother Jesse: Julia Galef!

[applause]

Julia: Hey, it’s really nice to be back. I’m so excited to be giving a talk at Skepticon. Last year was my first year and I got to moderate a panel, so this is an exciting new step for me. My talk today has been sort of organically growing over the last couple of years as I’ve become more and more involved in the skeptic and rationality movements, and I’ve gotten more and more practice, and learned many lessons the hard way about communicating ideas about rationality and critical thinking and skepticism to people.

The title of the talk is “The Straw Vulcan: Hollywood’s Illogical View of Logical Decision-Making”. So if there’s anyone in the audience who doesn’t recognize this face, this is Mr. Spock… Someone’s raising their hands, but it’s a Vulcan salute, so I don’t believe you don’t know him… So this is Mr. Spock; he’s one of the main characters on Star Trek, and he’s the First Officer and the Science Officer on the Starship Enterprise. And his mother is human, but his father is Vulcan.

The Vulcans are this race of aliens that are known for trying to live in strict adherence to the rules of reason and logic, and also for eschewing emotion. This is something I wasn’t clear on when I was remembering the show from my childhood, but it’s not that the Vulcans don’t have emotion, it’s just that over time they’ve developed very strict and successful ways of transcending and suppressing their emotions. So Spock, being half-Vulcan, has more lapses than a pure-blood Vulcan, but still, on the show Star Trek he is “The Logical Character”, and that makes up a lot of the inter-character dynamics and the storylines on the show.

[2:30]

So, here’s Spock. Here are the Vulcans. And I asked this question: “Vulcans: Rational Aliens?” with a question mark, because the brand of rationality that’s practiced by Spock and his fellow Vulcans isn’t actually rationality. And that’s what my talk is going to be about today.

This term, “Straw Vulcan”, I wish I could personally take credit for it, but I borrowed it from a website called TvTropes [audience cheers]. Yes! TvTropes! Some of the highest levels of rationality that I can find on the internet, let alone on any other pop culture or television blog. I highly recommend you check it out.

So they coined the term “Straw Vulcan” to refer to the type of fictional character who is supposed to be “The Logical One” or “The Rational One”, but whose brand of rationality is not real rationality. It’s sort of this weak, gimpy caricature of rationality. Essentially, you would think that if someone were super-rational, they’d be running circles around all the other characters in the TV show or in the movie.

But it’s this sort of “fake” rationality that’s designed to demonstrate that the real success, the real route to glory and happiness and fulfillment, is all of these things that people consider to make us essentially human, like our passion, and our emotion, and our intuition, and yes, our irrationality. And since that’s the point of the character, his brand of rationality is sort of this woeful caricature, and that’s why it’s called “A Straw Vulcan”.

Because if you’re arguing against some viewpoint that you disagree with, and you caricature that viewpoint in as simplistic and exaggerated a way as possible, to make it easy for yourself to just knock it down and pretend that you’ve knocked that entire viewpoint down, that’s a “Straw Man”… So these are “Straw Vulcans”.

As I was saying, Spock and his fellow Straw Vulcans play this role in their respective TV shows and movies, of seeming like the character that should be able to save the day, but in practice the day normally gets saved by someone like this. [Kirk slide] [laughter]

Yup! “I’m sorry, I can’t hear you over the sound of how awesome I am.”

So my talk today is going to be about Straw Vulcan rationality and how it diverges from actual rationality. And I think this is an important subject because… it’s possible that many of you in the audience have some misconceptions about rationality that have been shaped by these Straw Vulcan characters that are so prevalent. And even if you haven’t, it’s really useful to understand the concepts that are in people’s minds when you talk to them about rationality.

Because as I’ve learned the hard way again and again: even if it’s so clear in your mind that rationality can make your life better, and can make the world better, if people are thinking of Straw Vulcan rationality you’re never going to have any impact on them. So it’s really useful to understand the differences between what you’re thinking of and what many other people are thinking of when they talk about rationality.

First, what I’m going to do is define what I mean by “rationality”. This is actual rationality. I’m just defining this here because I’m going to refer back to it throughout my talk, and I want you to know what I’m talking about.

There are two concepts that we use rationality to refer to. One of them is sometimes called “epistemic rationality”, and it’s the method of obtaining an accurate view of reality, essentially. So the method of reasoning, and collecting evidence about the world, and updating your beliefs so as to make them as true as possible, hewing as closely to what’s actually out there as possible.

The other sense of the word rationality that we use is “instrumental rationality”. This is the method of achieving your goals, whatever they are. They could be selfish goals; they could be altruistic goals. Whatever you care about and want to achieve, instrumental rationality is defined as the method most likely to help you achieve them.

And obviously they’re related. It helps to have an accurate view of reality if you want to achieve your goals, with very few exceptions… But I’m not going to talk about that right now, I just want to define the concepts for you.

This is the first principle of Straw Vulcan Rationality: Being rational means expecting other people to be rational too. This is the sort of thing that tends to trip up a Straw Vulcan, and I’m going to give you an example:

In this scene that’s about to take place, the Starship’s shuttle has just crash-landed on a potentially hostile alien planet. Mr. Spock is in charge, and he’s come up with this very rational plan, in his mind, that is going to help them escape the wrath of the potentially aggressive aliens: they’re going to display their superior force, the aliens are going to see that, and they’re going to think rationally: “Oh, they have more force than we do, so it would be against our best interests to fight back, and therefore we won’t.” And this is what Spock does, and it goes awry because the aliens are angered by the display of aggression and they strike back.

This scene is taking place between Spock and McCoy, who’s like Spock’s foil. He’s the very emotional, passion- and intuition-driven doctor on the ship.

[7:45]

[video playing]

McCoy: Well, Mr. Spock, they didn’t stay frightened very long, did they?

Spock: Most illogical reaction. When we demonstrated our superior weapons, they should have fled.

McCoy: You mean they should have respected us?

Spock: Of course!

McCoy: Mr. Spock, respect is a rational process. Did it ever occur to you that they might react emotionally, with anger?

Spock: Doctor, I’m not responsible for their unpredictability.

McCoy: They were perfectly predictable. To anyone with feeling. You might as well admit it, Mr. Spock. Your precious logic brought them down on us!

[end video]

[8:45]

Julia: So you see what happens when you try to be logical… People die!

Except, of course, it’s exactly the opposite. This is irrationality, not rationality. Rationality is about having as accurate a view of the world as possible, and also about achieving your goals. And clearly Spock has persistent evidence, accumulated again and again over time, that other people are not actually perfectly rational, and he’s just willfully neglecting the evidence; the exact opposite of epistemic rationality. Of course it leads to the opposite of instrumental rationality too, because if people consistently behave the opposite of how you expect them to, you can’t possibly make decisions that are going to achieve your goals.

So this concept of rationality, or this particular tenet of Straw Vulcan Rationality, can be found outside of Star Trek as well. I was sort of surprised by the prevalence of it, but I’ll give you an example: this was an article earlier this year in InfoWorld, and basically the article is making the argument that one of the big problems with Google, and Microsoft, and Facebook, is that the engineers there don’t really understand that their customers don’t have the same worldview and values and preferences that they do.

For example, if you remember the debacle that was Google Buzz: it was a huge privacy disaster, because it signed you up automatically, and then as soon as you were signed up, all of your close personal contacts, like the friends that you emailed the most, suddenly got broadcast publicly to all your other friends. So the author of this article was arguing that, “Well, people at Google don’t care about this privacy, and so it didn’t occur to them that other people in the world would actually care about privacy.”

And there’s nothing wrong with that argument. That’s a fine point to make. Except, he titled the article: “Google’s biggest problem is that it’s too rational”, which is exactly the same problem as the last example. That is, they’re too Straw Vulcan Rational, which is irrational.

This is another example from a friend of mine who’s also a skeptic writer and the author of several really good books. This is Dan Gardner; he wrote Future Babble, and he’s spoken at the Northeast Conference on Science and Skepticism, which I also helped organize and moderate last year, and he’s great! He’s really smart. But on his blog I found this article he wrote in which he was criticizing an economist who was making the argument that the best way to fight crime would be to impose harsher penalties, because that would be a deterrent, and that would reduce crime, because people respond to incentives.

And Dan said, “Well, that would make sense, except that the empirical evidence shows that crime rates don’t respond nearly as much to deterrent incentives as we think they do, and so this economist is failing to update his model of how he thinks people should behave based on how the evidence suggests they actually do behave.” Which again is fine, except that his conclusion was: “Don’t Be Too Rational About Crime Policy.” So it’s exactly the same kind of thinking.

It’s sort of a semantic point, in that he’s defining rationality in this weird way, although I’m not disagreeing with his actual argument. But it’s this kind of thinking about rationality that can be detrimental in the long run.

This is the second principle of Straw Vulcan rationality: Being rational means you should never make a decision until you have all the information.

I’ll give you an example. I couldn’t find a clip of this, unfortunately, but this scene takes place in an episode called “The Immunity Syndrome” in season 2. Basically, people on the Starship Enterprise are mysteriously falling ill in droves, and there’s this weird high-pitched sound that they’re experiencing that’s making them nauseated, and Kirk and Spock see this big black blob on their screen and they don’t know what it is… It turns out it’s a giant space amoeba… Of course!

But at this point early in the episode they don’t really know much about it, and so Kirk turns to Spock for input, for advice, for his opinion on what he thinks this thing is and what they should do. And Spock’s response is: “I have no analysis due to insufficient information… The computers contain nothing on this phenomenon. It is beyond our experience, and the new information is not yet significant.”

It’s great to be loath, to be hesitant, to make a decision based on small amounts of evidence that isn’t yet significant, if you have a reasonable amount of time. But there are snap judgments that need to be made all the time, and you have to decide between paying the cost of all of the additional information that you want (and that cost could be in time, or in money, or in risk, if waiting is forcing you to incur more risk) or just acting based on what you have at the moment.

The rational approach, what a rationalist wants to do, is to maximize his… essentially, to make sure he has the best possible expected outcome. The way to do that is not to always wait until you have all the information, but to weigh the cost of the information against how much you think you’re going to gain from having it.

We all know this intuitively in other areas of life. Like, you don’t want the best sandwich you can get; you want the best sandwich relative to how much you have to pay for it. So you’d be willing to spend an extra dollar in order to make your sandwich a lot better, but if you had to spend $300 to make your sandwich slightly better, that wouldn’t be worth it. You wouldn’t actually be optimizing if you paid that $300 to make your sandwich slightly better.
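[Note: here is a minimal sketch of this weighing, in Python, with invented numbers that are not from the talk; you compare the expected value of acting now against the expected value of paying for more information first:]

```python
# Toy value-of-information comparison (illustrative numbers only).

value_of_good_outcome = 100.0  # how much a good outcome is worth to you
p_good_if_act_now = 0.60       # chance of a good outcome on current info
p_good_if_informed = 0.75      # chance after gathering more information
info_cost = 10.0               # cost of gathering it: time, money, or added risk

ev_act_now = p_good_if_act_now * value_of_good_outcome              # 60.0
ev_gather = p_good_if_informed * value_of_good_outcome - info_cost  # 65.0

# A rational agent doesn't "always wait for all the information";
# they pick whichever option has the higher expected outcome.
if ev_gather > ev_act_now:
    print("Gather more information first")  # wins here: 65 > 60
else:
    print("Act on what you have now")
```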

And again, this phenomenon, this interpretation of rationality, I found outside of Star Trek as well. Gerd Gigerenzer is a very well-respected psychologist, but this is him describing how a rational actor would find a wife:

“He would have to look at the probabilities of various consequences of marrying each of them—whether the woman would still talk to him after they’re married, whether she’d take care of their children, whatever is important to him—and the utilities of each of these… After many years of research he’d probably find out that his final choice had already married another person who didn’t do these computations, and actually just fell in love with her.”

So Gerd Gigerenzer is a big critic of the idea of rational decision making, but as far as I can tell, one of the reasons he’s a critic is because this is how he defines rational decision making. Clearly this isn’t actual optimal decision making. Clearly someone who’s actually interested in maximizing their eventual outcome would take into account the fact that doing years of research would limit the number of women who would still be available and actually interested in dating you after all of that research was said and done.

This is Straw Vulcan Rationality Principle number 3: Being rational means never relying on intuition.

Here’s an example. This is Captain Kirk, this is in the original series, and he and Spock are playing a game of three-dimensional chess.

[16:00]

[video starts, but there’s no sound]

Julia as Kirk: Checkmate! (he said)

Julia as Spock: Your illogical approach to chess does have its advantages on occasion, Captain.

[end video]

[laughter and applause]

Julia: Um, let me just check my sound—maximize my long-term expected outcome in this presentation by incurring a short-term cost. Well, we’ll hope that doesn’t happen again.

Anyway, so clearly an approach that causes you to win at chess cannot by any sensible definition be called an illogical way of playing chess. But from the perspective of Straw Vulcan Rationality it can, because anything intuition-based is illogical in Straw Vulcan rationality.

Essentially, there are two systems that people use to make decisions. They’re rather boringly called System 1 and System 2, but they’re more colloquially known as the intuitive system of reasoning and the deliberative system of reasoning.

The intuitive system of reasoning is an older system; it allows us to make automatic judgments, to make judgments using shortcuts which are sometimes known as heuristics. They’re sort of useful rules of thumb for what’s going to work, that don’t always work. But they’re good enough most of the time. They don’t require a lot of cognitive processing ability or memory or time or attention.

And then System 2, the deliberative system of reasoning, is much more recently evolved. It takes a lot more cognitive resources, a lot more attention, but it allows us to do more abstract critical thinking. It allows us to construct models of what might happen when it’s something that hasn’t happened before, whereas, say, a System 1 approach would decide what to do based on how things happened in the past.

System 2 is much more useful when you can’t actually safely rely on precedent and you have to actually think: “What are the possible future scenarios, and what would likely happen if I behaved in a certain way in each of those scenarios?” That’s System 2.

System 1 is more prone to bias. Eliezer Yudkowsky gave a great talk earlier this morning about some of the biases that we can fall prey to, especially when we’re engaging in System 1 reasoning. But that doesn’t mean that it’s always the wrong system to use.

I’ll give you a couple of examples of System 1 reasoning before I go any further. There’s a problem that logic teachers sometimes give to their students. It’s a very simple problem; they say a bat and a ball together add up to $1.10. The bat costs a dollar more than the ball. How much does the ball cost?

The intuitive System 1 answer to that question is 10 cents, because you look at $1.10, and you look at $1, and you take away the dollar and you get 10 cents. But if the ball were actually 10 cents and the bat were actually a dollar, then the bat would not cost a dollar more than the ball. So essentially that’s the kind of answer you get when you’re not really thinking about the problem, you’re just feeling around for… “Well, what do problems like this generally involve? Well, you generally take one thing away from another thing, so, I dunno, do that.”
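[Note: the talk leaves the deliberative answer implicit; writing the algebra out gives five cents, not ten:]

```latex
% Let x be the price of the ball, so the bat costs x + 1.00.
\begin{align*}
  x + (x + 1.00) &= 1.10 \\
  2x             &= 0.10 \\
  x              &= 0.05
\end{align*}
% The ball costs 5 cents and the bat $1.05 -- not 10 cents and a dollar.
```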

In fact, when this problem was given to a class at Princeton, 50% of them got the wrong answer. It just shows how quickly we reach for our System 1 answer, and how rarely we feel the need to actually go back and check it in a deliberative fashion.

Another example of System 1 reasoning that I really like is… you may have heard of this classic social psychology experiment in which researchers sent someone to wait in line at a copy machine, and they asked the person ahead of them, “Excuse me, do you mind if I cut in line?” And maybe about 50% or 40% of them agreed to let the experimenters’ plant cut ahead of them.

But then the experimenters redid the study, and this time, instead of saying “Can I cut in front of you?”, they said “Can I cut in front of you, because I need to make copies?” Then the agreement rate went up to like 99%. Something really high.

And there’s literally… like, of course they need to make copies! That’s the only reason they would have to cut in line at a copy machine. Except, because the request was phrased in terms of giving a reason, our System 1 reasoning kicks in and we go, “Oh, they have a reason! So, sure! You have a reason.”

[21:00]

System 1 and System 2 have their pros and cons in different contexts. System 1 is especially good when you have a short time span, and a limited amount of resources and attention to devote to a problem. It’s also good when you know that you have experience and memory that’s relevant to the question, but it’s not that easily accessible; like, you’ve had a lot of experiences of things like this problem, but our memories aren’t stored in some easy list where we can sort according to keywords and find the mean of the number of items in our memory base. So you have information in there, and really the only way to access it sometimes is to rely on your intuition. It’s also helpful when there are important factors that go into a decision that are hard to quantify.

There are a number of recent studies which have been exploring when System 1 reasoning is successful, and it tends to be successful when people are making purchasing decisions or other decisions about their personal life. And there are a lot of factors involved; there are dozens of factors relevant to what car you buy that you could consider, but a lot of what makes you happy with your purchase or your choice is just your personal liking of the car. And that’s not the sort of thing that’s easy to quantify.

When people try to think about it using their System 2 reasoning, they don’t really know how to quantify their liking of the car, and so when they rely on System 2 they often tend to just look at the mileage, and the cost, and all these other things, which are important but don’t really get at that emotional preference about the car. So that kind of information can be helpfully drawn out by System 1 reasoning.

Also, if you’re an expert in a field, say chess for example, you can easily beat someone who’s using careful deliberative reasoning, just based on all of your experience; you’ve built up this incredible pattern recognition ability with chess. So a chess master can just walk past a chess board, glance at it, and say, “Oh, white’s going to checkmate black in three moves.” Or chess masters can play many different chess games at once and win them all. And obviously they don’t have the cognitive resources to devote to each game fully, but the automatic pattern recognition system that they’ve built up over thousands and thousands of chess games works just well enough.

Intuition is less reliable in cases where the kinds of heuristics, or the kinds of biases that Eliezer spoke about earlier, are relevant, or when you have a good reason to believe that your intuition is based on something that isn’t relevant to the task at hand. So if you’re trying to say how likely work in artificial intelligence is to lead to some sort of global disaster, you might rely on your intuition. But you also have to think about the fact that your intuition in this case is probably shaped by fiction. There are a lot more stories about robot apocalypses and AI explosions that take over the world than there are stories about AI going in a nice, boring, pleasant way. So being able to recognize where your intuition comes from can help you decide when it’s a good guide in a particular context.

System 2 is better when you have more resources and more time. It’s also good, as I mentioned, in new and unprecedented situations, new and unprecedented decision-making contexts, where you can’t just rely on patterns of what’s worked in the past. So for a problem like global warming, or a problem like other existential risks that face our world, our species, the potential of a nuclear war… We don’t really have precedents to draw on, so it’s hard to think that we can rely on our intuition to tell us what’s going to happen or what we should do. And System 2 tends to be worse when there are many, many factors to consider and we don’t have the cognitive ability to consider them all fairly.

But the main takeaway of the System 1/System 2 comparison is that both systems have their strengths and weaknesses. And rationality is about trying to find the truest path to an accurate picture of reality, and it’s about trying to find what actually maximizes your own happiness, or whatever goal you have. So what you do is you don’t rely on one or the other blindly. You decide: based on this context, which method is going to be the most likely one to get me to what I want? The truth, or whatever other goals I have.

And I think that a lot of the times when you hear people say that it’s possible to be too rational, what they’re really talking about is that it’s possible to use System 2 deliberative reasoning in contexts where it’s inappropriate, or to use it poorly.

Here’s a real-life example: this is a headline from an article that came out earlier this year. If you can’t read it, it says “Is the Teen Brain Too Rational?” And the argument of the article (it was actually about a study) was that when teenagers are deciding to take some risks, like doing drugs or driving above the speed limit, they often do what is technically System 2 reasoning: they’ll think about the pros and cons, and think about what the risks are likely to be.

But the reason they do it anyway is because they’re really bad at this System 2 reasoning. They poorly weigh the risks and the benefits, and that’s why they end up doing stupid things. So the conclusion I would draw from that is: teens are bad at System 2 reasoning. The conclusion the author drew from that is that teens are too rational.

Another example: I was Googling around for examples to use in this talk, and I found what I thought was a perfect quote illustrating the principle that I’m trying to describe to you:

“It is therefore equally unbalanced to be mostly “intuitive” (i.e. ignoring that one’s first impression can be wrong), or too rational (i.e. ignoring one’s hunches as surely misguided)”

Here I would say that if you ignore your hunches blindly and assume they’re misguided, then you’re not being rational, you’re being irrational. And so I was happily copying down the quote, before having looked at the author. Then I checked to see who the author of the post was, and it’s the co-host of my podcast, “Rationally Speaking”, Massimo Pigliucci, whom I am very fond of, and with whom I am probably now going to get in trouble. But I couldn’t pass up this perfect example, and that’s just how committed I am to teaching you guys about true rationality: I will brave the wrath of that Italian man there.

So, Straw Vulcan Rationality Principle number 4: Being rational means not having emotions.

And this is something I want to focus on a lot, because I think the portrayal of rationality and emotions by Spock’s version, by the Straw Vulcan version of rationality, is definitely confused, is definitely wrong. But I think the truth is nuanced and complicated, so I want to draw this one out a little bit more.

But first, a clip:

[video]

Julia: Oh! Spock thinks the captain’s dead.

Spock: Doctor, I shall be resigning my commission immediately, of course, so I would appreciate your making the final arrangements.

McCoy: Spock, I…

Spock: Doctor, please. Let me finish. There can be no excuse for the crime of which I am guilty. I intend to offer no defense. Furthermore, I will order Mr. Scott to take immediate command of this vessel.

Kirk: (walking up from behind) Don’t you think you better check with me first?

Spock: Captain! Jim! (big smile, then regains control)… I’m… pleased… to see you again, Captain. You seem… uninjured.

[end video]

[29:15]

Julia: So he almost slipped up there, but he caught himself just in time. Hopefully none of the other Vulcans found out about it.

This is essentially the Spock model of how emotions and rationality relate to each other: you have a goal, and you use rationality, unencumbered by emotion, to figure out what action to take to achieve that goal. Then emotion can get in the way and screw up this process if you’re not really careful. This is the Spock model. And it’s not wrong per se. Emotions can clearly, and frequently do, screw up attempts at rational decision making.

I’m sure you all have anecdotal examples just like I do, but to throw some out there: if you’re really angry, it can be hard to recognize the clear truth that lashing out at the person you’re angry at is probably not going to be a good idea for you in the long run. Or if you’re in love, it can be hard to recognize the ways in which you are completely incompatible with this other person, and that you’re going to be really unhappy with this person in the long run if you stay with them. Or if you’re disgusted and irritated by hippies, it can be hard to objectively evaluate arguments that you associate with hippies, like, say, criticisms of capitalism.

These are just anecdotal examples, but there’s plenty of experimental research out there that demonstrates that people’s rational decision-making abilities suffer when they’re in states of heightened emotion. For example, when people are anxious they over-estimate risks by a lot. When people are depressed, they under-estimate how much they are going to enjoy some future activity that’s proposed to them.

And then there’s a series of really interesting studies by a couple of psychologists named Whitson and Galinsky that demonstrate that when people are feeling threatened or vulnerable, or like they don’t have control, they tend to be much more superstitious; they perceive patterns where there are no patterns; they’re more likely to believe conspiracy theories; they’re more likely to see patterns in companies’ financial data that aren’t actually there; and they’re more likely to invest, to put their own money down, based on these non-existent patterns that they thought they saw.

So Spock is not actually wrong. The problem with this model is that it is just incomplete. And the reason it’s incomplete is that “Goal” box. Where does that “Goal” box come from? It’s not handed down to us from on high. It’s not written into the fabric of the universe. The only real reason that you have goals is because you have emotions—because you care about some outcomes in the world more than others; because you feel positively about some potential outcomes and negatively about other potential outcomes.

If you really didn’t care about any potential state of the world more or less than any other potential state of the world, it wouldn’t matter how skilled your reasoning abilities were; you’d never have reason to do anything. Essentially you’d just look like this… “Meh!” I mean, even rationality for its own sake isn’t really coherent without some emotion, because if you want to do rationality, if you want to be rational, it’s because you care more about having the truth than you do about being ignorant.

Emotions are clearly necessary for forming the goals; rationality is simply lame without them. But there’s also some interesting evidence that emotions are important for making the decisions themselves.

There’s a psychologist named Antonio Damasio who studies patients with brain damage to a certain part of their brain… ventral parietal frontal cortex… I can’t remember the name, but essentially it’s a part of the brain that’s crucial for reacting emotionally to one’s thoughts.

The patients who suffered from this injury were perfectly undamaged in other ways. They could perform just as well on tasks of visual perception, and language processing, and probabilistic reasoning, and all these other forms of deliberative reasoning and other senses. But their lives very quickly fell apart after the injury, because when they were making decisions they couldn’t actually simulate viscerally what the value of the different options was to them. So their jobs fell apart, their interpersonal relations fell apart, and a lot of them also became incredibly indecisive.

Damasio tells the story of one patient of his who, when he left the doctor’s office, was given the choice of a pen or a wallet… some cheap little wallet, whatever you want… And the patient sat there for about twenty minutes trying to decide. Finally he picked the wallet, but when he went home he left a message on the doctor’s voicemail saying, “I changed my mind. Can I come back tomorrow and take the pen instead of the wallet?”

And the problem is that the way we make decisions is we sort of query our brains to see how we feel about the different options, and if you can’t feel, then you just don’t know what to do. So it seems like there’s a strong case that emotions are essential to this ideal decision-making process, not just in forming your goals, but in actually weighing your different options in the context of a specific decision.

This is the first revision I would make to the model of Straw Vulcan decision-making. And this is sort of the standard model for ideal decision-making as, say, economics formulates it. You have your values. (Economics doesn’t particularly care what they are.) But the way economics formulates a rational actor is someone who acts in such a way as to maximize their chances of getting what they value, whatever that is.

And again, that’s a pretty good model. It’s not a bad simplification of what’s going on. But the thing about this model is that it takes your emotional desires as a given. It just says: “Given what you desire, what’s the best way to get it?” And we don’t have to take our desires as a given. In fact, I think this is where rationality comes back into the equation. We can actually use rationality to think about our instinctual emotional desires, and, as a consequence of them, the things that we value: our goals. And think about what makes sense rationally.

This is a little bit of a controversial statement. Some psychologists and philosophers would say that emotions and desires can’t be rational or irrational; you just want what you want. And certainly they can’t be rational or irrational in the same way that beliefs can be rational or irrational. Some philosophers might argue about this, but I would say that you can’t be wrong about what you want.

But I think there’s still a strong case to be made for some emotions being irrational. Think back to the two definitions of rationality that I gave you earlier: there was epistemic rationality, which was about making your beliefs about the world as true as possible, and there was instrumental rationality, which was about maximizing your chances of getting what you want, whatever that is. So I think it makes sense to talk about emotions as being epistemically irrational if they’re implicitly, at their core, based on a false model of the world.

And this happens all the time. For example, you might be angry at your husband for not asking how your presentation at work went. It was a really important presentation, and you can’t believe he didn’t ask you. And that anger is predicated on the assumption, whether conscious or not, that he should have known it was important. That he should have known that this was an important presentation to you. But if you actually take a step back and think about it, it could be that, no, you never actually gave him any indication that this was important, and that you were worried about it. So then that would make that emotion irrational, because it’s based on a false model of reality.

Or, for example, you might feel guilty about something, even though when you consciously think about it, you would have to acknowledge that you did nothing to cause it and that there was nothing you could have done to prevent it. So I would be inclined to call that guilt also epistemically irrational.

Or, for example, people might feel depressed, and that depression is predicated on the assumption that there’s nothing they could do to better their situation. Sometimes that might be true, but a lot of the time it’s not. I would call that also an irrational emotion, because you may have some false beliefs about your capabilities of improving whatever the problem is. That’s epistemic irrationality.

Emotions can clearly be instrumentally irrational if they’re making you worse off. If something like jealousy, or spite, or rage, or envy is unpleasant to you, and it’s not actually inspiring you to make any positive changes in your life, and it’s perhaps causing rifts with people you care about, and making you less happy that way, then I’d say that’s pretty clearly preventing you from achieving your goals.

So emotions can be instrumentally and epistemically irrational. Using rationality is what helps us recognize that and shape our goals based not on what our automatic emotional desires are, but on what our rationality-filtered emotional desires are. I put several emotional desires here, because another role that rationality plays in this ideal decision process is recognizing when you have conflicting desires, and weighing them against each other; deciding which is more important to you, and whether one of them can be changed, etc., etc.

For example, you might value being the kind of person who tells the truth, but you also might value being the kind of person that’s liked by people. So you have to have some way of weighing those two desires against each other before you decide what your goal in a particular situation actually is.

This would be my next update to the Straw Vulcan model of emotion and rationality. But you can actually improve it a little bit more, too. You can change your emotions using rationality. This is not that easy, and it can often take some time. But it’s definitely something that we know how to do, at least in limited ways.

For example, there’s a field of psychotherapy called cognitive therapy, sometimes combined with behavioral techniques and called cognitive behavioral therapy. Their motto, if you can call it that, is “Changing the way you think can change the way you feel.” They have all of these techniques and exercises you can do to get over depression, or anger, or anxiety, and other instrumentally and often epistemically irrational emotions.

Here’s an example: this is a cognitive therapy worksheet. A “Thought Record” is one of the most common exercises that cognitive therapy has its patients do, and it’s common sense, essentially. It’s about writing down and noting your thoughts when your emotions start to run away with you, or run away with themselves. And then stopping, and asking, “What is the evidence that supports this thought that I have?”

I’m sorry, to back up… noting the thoughts that are underlying the emotions that you’re feeling. I was talking about these implicit assumptions about the world that your emotions are based on. It gets you to make those explicit, and then question whether you actually have good evidence for believing them.

This sort of process, plus lots of other exercises that psychotherapists do with their patients, and that people can even do at home by themselves from a book, is by far the most empirically validated and well-tested form of psychotherapy. In fact, some would say it’s the only one that’s really supported by evidence so far.

Even if you’re not doing an official campaign of cognitive therapy to change your emotions, and by way of your emotions your desires and goals, there are still plenty of informal ways to rationally change your emotions and make yourself better off.

For example, in the short term you could recognize when you feel that first spark of anger, and decide whether or not you want to fuel that anger by dwelling on what that person has done in the past that angered you, and imagining what they were thinking about you at that moment. Or you could decide to try to dampen the flames of your burgeoning anger by instead thinking about times that you’ve screwed up in the past, or thinking about things that that person has done for you that were actually kind. So you actually have a lot more conscious control, if you choose to take it, over which direction your emotions push you in than I think a lot of people realize.

[41:00]

In the longer term, you can even change what you value. It’s hard, and it does tend to take a while, but let’s say that you wish you were a more compassionate person. And you have conflicting desires. One of your desires is to lie on the couch every night, and another one of your desires is to be the kind of person that other people will look up to. So you want to bring those conflicting desires into harmony with each other.

You can actually, to some extent, make yourself more compassionate over time if that’s what you want to do. You can choose to expose yourself to material, to images, and to descriptions of suffering refugees, and you can consciously decide to imagine that it’s you in that situation. Or that it’s your friends and family. And you can do the thought experiment of asking yourself what the difference is between these people suffering, and you or your friends and family suffering, and you can bring the emotions about by thinking of the situations rationally.

This is essentially my rough final model of the relationships between emotions and rationality in ideal decision-making, as distinguished from the Straw Vulcan model of the relationships between emotions and rationality.

Here’s Straw Vulcan Rationality Principle number 5: Being rational means valuing only quantifiable things—like money, efficiency, or productivity.

[video]

[43:45]

McCoy: There’s just one thing, Mr. Spock. You can’t tell me when you first saw Jim alive that you weren’t on the verge of giving us an emotional scene that would have brought the house down.

Spock: Merely my quite logical relief that Starfleet had not lost a highly proficient captain.

Kirk: Yes. I understand.

Spock: Thank you, Captain.

McCoy: Of course, Mr. Spock. Your reaction was quite logical.

Spock: Thank you, Doctor. (starts walking away)

McCoy: In a pig’s eye.

Kirk: Come on, Spock. Let’s go mind the store.

[end video]

Julia: So it’s not acceptable, in Straw Vulcan Rationality world, to feel happiness because your best friend and Captain is actually alive instead of dead. But it is acceptable to feel relief, I suppose, as Spock did say, because a proficient worker in your Starfleet is alive and can therefore do more proficient work. That kind of thing is rationally justifiable on the Straw Vulcan model of how rationality works.

Here’s another example: this is from an episode called “This Side of Paradise”, and in this episode they’re visiting a planet where there are these flowers that release spores, where if you inhale them, you suddenly get really emotional. This woman, who has a crush on Spock, makes sure to position him in front of one of the flowers when it opens and releases its spores. So all of a sudden, he’s actually romantic and emotional. This is Kirk and the crew trying to get in touch with him while he’s out frolicking in the meadow with his lady love.

[45:30]

[video]

Kirk: (on communication device) Spock?

Spock: That one looks like a dragon. See the tail, and dorsal spines.

Lady: I’ve never seen a dragon.

Spock: I have. On Berengaria VII. But I’ve never stopped to look at clouds before, or rainbows. Do you know I can tell you exactly why one appears in the sky? But considering its beauty has always been out of the question.

[end video]

Julia: So this model of rationality, in which the only things to value are quantifiable things that don’t have to do with love or joy or beauty… I’ve been trying to figure out where this came from. One of my theories is that it comes from the way economists talk about rationality, where a rational actor maximizes his expected monetary gain.

This is a convenient proxy, because in a lot of ways money can stand in for happiness: whatever it is that you want that you think is going to make you happy, you can often buy with money, and things that are making you unhappy you can often get rid of with money. It’s obviously not a perfect model, but it’s good enough that economists sometimes use money as a proxy for utility or happiness when they’re modeling how a rational actor should make decisions.

But no economist in their right mind would tell you that money is inherently valuable or useful. You can’t do anything with it. It can only be useful, valuable, worth caring about, for what it can do for you.

All of these things in the Straw Vulcan method of rationality, which it considers acceptable things to value, make no sense as values in and of themselves. It makes no sense to value productivity in and of itself if you are not allowed to be happy about someone you care about being alive instead of dead. It doesn’t make sense at all to care about productivity or efficiency in themselves. The only way they could possibly be useful to you is in getting you more outcomes like the one where your best friend is alive instead of dead. So if you don’t value that, then don’t bother.

This is one more example from real life, if you can consider an internet message board “real life”. I found a discussion where people were arguing about whether or not it was possible to be too rational, and one of them said, “Well, sure it is!”, and the other said, “Well, give me an example,” and he said, “Well, fine, I will.”

His example was two guys driving in a car, and one of them says, “Oh, well, we need to get from here to there, so let’s take this road.” And the second guy says, “No, but that road has all this beautiful scenery, and it has this historical site which is really exciting, and it might have a UFO on it.” And the first guy says, “No, we have to take this first road because it is 0.2 miles shorter, and we will save 0.015 liters of gas.”

And that was this message board commenter’s model of how a rational actor would think about things. So I don’t actually know if that kind of thinking is what created Straw Vulcans in TV and the movies, or whether Straw Vulcans are what created people’s thinking about what rationality is; it’s probably some combination of the two. But it’s definitely a widespread conception of what rationality consists of.

I myself had a conversation with a friend of mine a couple years back, when I was first starting to get excited about rationality, and read about it, and study it. She said, “Oh, it’s interesting that you’re interested in this, because I’m trying to be less rational.”

It took me a while to get to the bottom of what she actually meant by that. But it turns out what she meant was that she was trying to enjoy life more. And she thought that rationality was about valuing money, and getting a good job, and being productive and efficient. And she just wanted to relax and enjoy sunsets, and take it easy. To express that, she said that she wanted to be less rational.

Here’s one more clip of Spock and Kirk after they left that planet. Basically, sorry for the spoiler, guys, but Kirk finds a way to cure Spock of his newfound emotions, and bring him back on board as the emotionless Vulcan he always was.

[50:00]

[video]

Kirk: We haven’t heard much from you about Omicron Ceti III, Mr. Spock.

Spock: I have little to say about it, Captain. Except that, for the first time in my life, I was happy.

[end video]

Julia: I know, awww. So I want to end on this, because I think the main takeaway from all of this that I want to leave you guys with is:

If you think you’re acting rationally, but you consistently keep getting the wrong answer, and you consistently keep ending up worse off than you could be… then the conclusion you should draw from that is not that rationality is bad. It’s that you’re being bad at rationality. In other words, you’re doing it wrong! Thank you!

[applause]