# Elevator pitches/responses for rationality / AI

I’m trying to develop a large set of elevator pitches / elevator responses for the two major topics of LW: rationality and AI.

An elevator pitch lasts 20-60 seconds, and is not necessarily prompted by anything, or at most is prompted by something very vague like “So, I heard you talking about ‘rationality’. What’s that about?”

An elevator response is a 20-60 second, highly optimized response to a commonly heard sentence or idea, for example, “Science doesn’t know everything.”

Examples (but I hope you can improve upon them):

“So, I hear you care about rationality. What’s that about?”

Well, we all have beliefs about the world, and we use those beliefs to make decisions that we think will bring us the most of what we want. What most people don’t realize is that there is a mathematically optimal way to update your beliefs in response to evidence, and a mathematically optimal way to figure out which decision is most likely to bring you the most of what you want, and these methods are defined by probability theory and decision theory. Moreover, cognitive science has discovered a long list of predictable mistakes our brains make when forming beliefs and making decisions, and there are particular things we can do to improve our beliefs and decisions. [This is the abstract version; probably better to open with a concrete and vivid example.]
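For readers who want the math behind “update your beliefs in response to evidence”: here is a minimal sketch of a Bayesian update. The medical-test numbers are invented for illustration, not taken from the post.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative numbers: a disease with 1% prevalence, a test with
# 90% sensitivity and a 5% false-positive rate.
prior = 0.01                 # P(disease)
p_pos_given_disease = 0.90   # sensitivity
p_pos_given_healthy = 0.05   # false-positive rate

# Total probability of a positive test result, by the law of total probability.
p_pos = p_pos_given_disease * prior + p_pos_given_healthy * (1 - prior)

# Posterior probability of disease given a positive result.
posterior = p_pos_given_disease * prior / p_pos
print(round(posterior, 3))  # -> 0.154, far lower than most people guess
```

This is the classic base-rate example: even with a positive result from a fairly accurate test, the posterior stays low because the prior is low, which is exactly the kind of predictable mistake the pitch alludes to.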

“Science doesn’t know everything.”

As the comedian Dara O’Briain once said, science knows it doesn’t know everything, or else it’d stop. But just because science doesn’t know everything doesn’t mean you can use whatever theory most appeals to you. Anybody can do that, and use whatever crazy theory they want.

“But you can’t expect people to act rationally. We are emotional creatures.”

But of course. Expecting people to be rational is irrational. If you expect people to usually be rational, you’re ignoring an enormous amount of evidence about how humans work.

“But sometimes you can’t wait until you have all the information you need. Sometimes you need to act right away.”

But of course. You have to weigh the cost of new information against its expected value. Sometimes it’s best to just act on the best of what you know right now.
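That cost-benefit comparison can be made concrete with a toy value-of-information calculation. The payoff numbers below are invented for illustration only.

```python
# Toy value-of-information calculation. You must choose action A or B.
# The world is in state "good" with probability 0.6, "bad" otherwise.
# Illustrative payoffs: A pays 100 in "good" but 0 in "bad"; B always pays 50.
p_good = 0.6
payoff = {"A": {"good": 100, "bad": 0}, "B": {"good": 50, "bad": 50}}

def expected(action):
    """Expected payoff of an action under current uncertainty."""
    return p_good * payoff[action]["good"] + (1 - p_good) * payoff[action]["bad"]

# Acting now: pick the action with the best expected payoff.
best_now = max(expected("A"), expected("B"))  # A: 60, B: 50 -> 60

# With perfect information you would pick the best action in each state.
ev_with_info = p_good * 100 + (1 - p_good) * 50  # 60 + 20 = 80

value_of_information = ev_with_info - best_now
print(value_of_information)  # -> 20.0
# Gathering the information is worth it only if it costs less than 20.
```

If finding out the true state costs more than the 20 units it gains you, acting right away on current knowledge really is the rational move.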

“But we have to use intuition sometimes. And sometimes, my intuitions are pretty good!”

But of course. We even have lots of data on which situations are conducive to intuitive judgment, and which ones are not. And sometimes, it’s rational to use your intuition because it’s the best you’ve got and you don’t have time to write out a bunch of probability calculations.

“But I’m not sure an AI can ever be conscious.”

That won’t keep it from being “intelligent” in the sense of being very good at optimizing the world according to its preferences. A chess computer is great at optimizing the chess board according to its preferences, and it doesn’t need to be conscious to do so.

Please post your own elevator pitches and responses in the comments, and vote for your favorites!

• First, a general comment on your versions, sorry: you tend to use big words, scientific jargon, and too few examples.

Compare your pitch with the following (intentionally oversimplified) version:

“So, I hear you care about rationality. What’s that about?”

It’s about figuring out what you really want and getting it. If you are at a game, and it’s really boring, should you walk out and waste what you paid for the tickets? If you apply for a position and don’t get it, does it help to decide that you didn’t really want it, anyway? If you are looking to buy a new car, what information should you take seriously? There are many pitfalls on the road to making a good decision; rationality is a systematic study of the ways to make better choices in life. Including figuring out what “better” really means for you.

• First, a general comment on your versions, sorry: you tend to use big words, scientific jargon, and too few examples.

And doing that is going to instantly turn people off.

• It’s about figuring out what you really want and getting it. If you are at a game, and it’s really boring, should you walk out and waste what you paid for the tickets? If you apply for a position and don’t get it, does it help to decide that you didn’t really want it, anyway? If you are looking to buy a new car, what information should you take seriously? There are many pitfalls on the road to making a good decision; rationality is a systematic study of the ways to make better choices in life. Including figuring out what “better” really means for you.

Makes it sound great, but what are the real world benefits? I’ve been rational for years and it hasn’t done anything for me.

• 15 comments and −120 karma? Okay, at this point I may begin an immune response against trolling (delete further comments, possibly past comments, as and when I get around to seeing that they were made).

I also remind everyone: Please do not respond at length to trolls; attention stimulates their reward centers.

• I’m not so sure he’s a troll. He very well might be, but at least he made this comment, which is at 4 karma right now. His more recent comments seem better than his previous ones, too. p(troll) seems pretty high, but not so high that I would support a ban, comment deletions, etc. at this point.

• Most of his comments essentially say “you are wrong”. Once he was right in saying that; many times he was wrong. He probably knows a lot of facts about many topics, and he expresses them with very high certainty; unfortunately the quality of his comments does not match this certainty, and he seems very immune to feedback. Low karma just proves he is right.

He is very negative towards others. Almost all his comments contain something like: “Your work is wrong.” “I never said anything like this.” “I never flamed anyone.” “spelled wrong” “I have no such delusions.” “it hasn’t done anything for me.” “it’s definitely going to do more harm than good.” “I already explained why it’s not possible.” “There is practically no chance” “It’s a misconception” “This idea is based on a whole range of confusions and misunderstandings” “just another example of people not understanding” It’s like his only point in discussions is to show that everyone else is wrong, but it’s often him who is wrong. Did he make some useful contribution? I don’t see any.

And then the “You are trying to submit too fast. try again in %i minutes.” and “You do not have enough karma to downvote right now. You need 1 more point.” just make me want to scream. (Though the fact that he does not have enough karma to downvote makes me happy. I guess he was going to downvote those who disagree with him. I am happy that the LW karma system does not allow him to make a dozen sockpuppet accounts and start a downvoting war.)

Maybe the guy is not having fun; maybe that’s just how he honestly is… but anyway his comments seem optimized to create mental suffering in others, certainly in me. I have left websites where people like this became frequent. If this kind of behavior becomes tolerated on LW, I will either write some GreaseMonkey plugin that will remove all his comments from the page, or I will simply stop reading LW. In theory I am reading this site for information, not for positive emotion, but I am just a human… if this site gives me negative emotion too often, I will stop reading it.

I tried to give him the benefit of the doubt, and answered his comment seriously, but now I feel it was totally not worth doing. This is my worst experience on LW so far. Though this mostly means that I did not have bad experiences on LW so far. :) But I prefer it to stay this way.

• I tend to agree with you. I think I just have a higher threshold for banning. As such, I would like to see him actively ignore our suggestions before entirely dismissing him, which I’m not sure is something he’s done yet.

• I’m not so sure he’s a troll. He very well might be, but at least he made this comment which is at 4 karma right now.

This is how trolling works.

• Well he hasn’t commented recently, so I’m guessing he either took our advice and made a new account, or just left the site, neither of which I would attribute to troll behaviour. (Or Eliezer is deleting his posts as promised, which would, obviously, weaken that hypothesis.)

• Less Wrong isn’t some kind of human right that we need to go beyond reasonable doubt to withdraw from someone; it’s an online community run by an enlightened dictator, and if you want to keep your well kept garden, you have to accept some collateral damage.

• I am extremely wary of this kind of thinking. Partly because using power is a slippery slope to abusing power, and each time you use the banhammer on a maybe-troll it gets a little bit easier to use it on the next maybe-troll.

Not just because of that, but also because when other people come to a community full of self-purported rationalists, and they see someone who does not obviously and immediately pattern match as a troll receiving the banhammer for presenting community-disapproved opinions in what seems superficially to be an adequately calm and reasonable manner, that sets off the ‘cult’ alarms. It makes us look intolerant and exclusionary, even if we aren’t.

It’s fine for places like the SA forums to throw the banhammer around with reckless abandon, because they exist only for fun. But we have higher goals. We have to consider not just keeping our garden tidy, but making sure we don’t look like overzealous pruners to anybody who has a potentially nice set of azaleas to contribute.

• Slippery slopes work in both directions. Each time you don’t strike down injustice, it becomes a bit easier to walk by the next time. I’d sooner have Marginal Value > Marginal Cost than Marginal Value < Marginal Cost and a lower Average Value.

Bad impressions work in both directions. When other people come to a community full of self-purported rationalists, and they see someone presenting stupid, low-status, incendiary comments and being treated as worthy of respect, it makes LW look stupid, low-status and incendiary because of the Representativeness Heuristic.

Obviously there is a continuum between anarchy and banning everything, and both extremes are local minima. The issue is to judge the local gradient.

• Upvoted for valid point. I agree, but I think there is enough of a difference between ‘being treated as worthy of respect’ and ‘not being banned’ that we can probably ride in the middle ground comfortably without any significant image damage.

On consideration, though… maybe I’m prejudiced against banning because of the sense of finality of it. I guess it’s not hard to make a new account.

I’m still opposed to deleting past comments though, because deleted comments make a mess of the history.

• I wonder if downvotes have gone from a punishment to a reward at this point.

• I say just ban him.

• I hope you’ll treat me fairly as a person and actually read and try to understand my comments instead of jumping to conclusions based on my “score”.

• Your best way to be taken seriously would be just to create a new account without making any reference to this one, and, well, not act like a troll.

• Huh. Come to think of it, on the Internet there IS a second chance to make a first impression (a good argument for always using handles). Noted.

• Are you enjoying wasting your time on this website?

You have 15 comments and a grand total of −120 karma. That is a strong indication that you are doing something wrong. To save you some time: the standard response is “I’m being censored! You’re an Eliezer-cult! All these downvotes are just because you’re scared of the Truth!”.

Please don’t use it, because it is not true: e.g. two links you’ve already seen, people call Eliezer out on mistakes, nuanced responses to “Yay for Eliezer/rationality/SI!”-type posts. Part of the reason I like LW is precisely because people do disagree, but there are almost never flame wars: the disagreement means that people actually think about what they believe and even change their minds!

What you are doing is not fitting into the community norms of discussion, like research and linking/referring to specific sources (anyone can say “I’ve done research!”, but that doesn’t mean that you have). (I’ll pre-empt another common whinge: yes, in most cases, Wikipedia is an acceptable reference to use on LW.)

The parent comment might not be particularly bad; but your history (and your username) puts you very close to “troll”, and that makes the parent comment look like a pattern-matched response (rather than a genuine question), which is the reason I downvoted.

• To save you some time: the standard response is “I’m being censored! You’re an Eliezer-cult! All these downvotes are just because you’re scared of the Truth!”.

I never said anything like this and I never invoked Eliezer. I don’t understand why you’re telling me off for something I didn’t do. Look at my post history if you don’t trust me.

What you are doing is not fitting into the community norms of discussion, like research and linking/referring to specific sources

It only makes sense to do so when making a claim. Yet people on this site have refused to back up their own claims with citations because apparently “I’m not worth bothering with”.

but there are almost never flame wars

I never flamed anyone. The only guy who is calling people names (“troll”, for example) is you (well, now that you’ve done it others are following your lead too, well done..).

Are you enjoying wasting your time on this website?

Not really, I didn’t expect to get rejected so harshly. I’ve read all the sequences twice and been rational for years, so I don’t know what the problem is. What’s the point of all this meta discussion? Why is everyone trying to drag me into these metadiscussions and brand me as a troll after I passed 100 downvotes? We should get back onto the actual topic.

You are trying to submit too fast. try again in 6 minutes.

• One of the problems is that you say things like “I’ve been rational for years”. Sorry. No, you haven’t. EY hasn’t been rational for years. You may have been an aspiring rationalist, but that’s a far cry from actually being rational. When you say things like that it is extremely off-putting because it sounds self-congratulatory. That’s something that this community struggles with a lot, and we typically heavily downvote things that are that way because they send very bad signals about what this website is. Beyond that, when it’s said by someone with the username “911truther”, it implies an element of “You’re not rational unless you’re a truther too”, which, mean it or not, is how it comes across.

Secondly, and this relates, your username. It’s inherently political, which brings up all of our opposition to politics every time you make a post. That’s not a good thing, and it will be very difficult for anyone on this site to take you seriously. If two different people wrote two articles that were of exactly equal caliber, and one was named BobSmith, and the other was named Obama2012, I would anticipate at least 2-3 times the upvoting on the former and 2-3 times the downvoting on the latter. And 9/11 is so much more of a polarizing issue. The vast, vast majority of people here disagree with you. But roland, despite being wildly downvoted every time he brings up 9/11, actually manages positive karma, because it’s not inherently brought up every time he posts. I cannot recommend strongly enough that you delete your account and create a new username if you wish to continue on this site. If you’re a 911 truther, I would not suggest lying about that, but choosing that as the phrase by which you identify yourself is not a very effective strategy for being taken seriously on this site.

Thirdly, the great grandparent to this isn’t a terrible comment. I agree with you there. I likely would have upvoted it had it been made by a different username, since I didn’t think it deserved that level of downvoting (but not because I thought it was particularly wonderful in and of itself).

• Yet people on this site have refused to back up their own claims with citations because apparently “I’m not worth bothering with”.

I found this claim difficult to believe, so I looked it up. For the record:

911truther: Freezing things makes water expand and burst the fragile parts of your brain.

gwern: Freezing canard: proof you have not read the cryonics literature. Instant downvote.

911truther: If “the cryonics literature” (presumably explaining why freezing does not destroy the brain) actually exists why don’t you link to it?

gwern: Because spending the time to look up references solid enough that they cannot be glibly rejected indicates that I think someone is worth educating, that I can educate them, or it’s a sign of respect.

None of those three are true. So if you think you are right, you are free to bring your own references to the table.

• I do wish we could discourage the attitude displayed here by gwern. It’s pure ego to respond in this way to someone you deem a “troll”. It certainly won’t change their mind, and it will only spur them to comment more. Either ignore them completely after downvoting, or be polite in your reply. One might justify these posts as important to make sure that 911truther knows why he’s being downvoted, but the aggression in them is entirely counter-productive and, frankly, is quite rude.

For the record, I do think people are a little over-eager to accuse someone of being a “troll” (I think it is much more probable that 911truther is simply ignorant), although I think moderation is warranted in this case.

• Was this before or after the other links in other conversations?

• I never said anything like this and I never invoked Eliezer. I don’t understand why you’re telling me off for something I didn’t do. Look at my post history if you don’t trust me.

I know you didn’t invoke Eliezer, but that is a common statement by people who find themselves downvoted a lot, so I was pre-empting it (if you were not going to do that, I apologise and that sentence should be considered removed from my quote; however, the rest still stands). The only reason I said that was because I looked at your post history and saw this one:

[...] If you look at my user page (http://lesswrong.com/user/911truther) it’s blatantly obvious that someone is systematically downvoting everything I post multiple times. I don’t claim to be persecuted but clearly there is an attempt to censor me. Frankly it just proves that I’m right; if I was wrong people could easily disprove me.

For the rest:

• People have been providing links and citations to back up their claims. (Several of the replies in this thread)

• I wasn’t implying that you flamed anyone, just that dissent is part of this website, and it is treated with respect.

• Dismissing accusations of “troll” with uncheckable and irrelevant claims of rationality is not the right way to do it.

• Rational compared to who?

• What most people don’t realize is that there is a mathematically optimal way to update your beliefs in response to evidence, and a mathematically optimal way to figure out which decision is most likely to bring you the most of what you want

You said something similar in a recent video interview posted on LW, and it made me cringe then, as it does now. We don’t know of such optimal ways in the generality the context of your statement suggests, and any such optimal methods would be impractical even if known, which again is in conflict with the context. Similarly, turning to the interview, SingInst’s standard positions on many issues don’t follow from formal considerations such as logic and decision theory; there is no formal theory that represents them to any significant extent. If there is strength to the main arguments that support these positions, it doesn’t currently take that form.

• Fair enough. My statement makes it sound like we know more than we do. Do you like how I said it here, when I had more words to use?

• It made me cringe as well, but more because it will make people hug the opposite wall of the proverbial elevator, not because such methods are conclusively shown as impractical (http://decision.stanford.edu/).

• I think Ian Pollock more effectively got at what Luke is trying to communicate.

• Possible absolute shite ahead (I went the folksy route):

“So, I hear you care about rationality. What’s that about?”

It’s about being like Brad Pitt in Moneyball. (Oh, you didn’t see it? Here’s a brief spoiler-free synopsis.) It’s the art of seeing how others, and even yourself, are failing, and then doing better.

“Science doesn’t know everything.”

Oh, yeah, I completely agree. But, it does know a helluva lot. It put us on the moon, gave us amazing technology like this [pull out your cellphone], and there’s every reason to think it’s going to blow our minds in the future.

“But you can’t expect people to act rationally. We are emotional creatures.”

Yeah, no, that’s true. We’ve recently seen all kinds of bad decisions (the housing crisis and so on). But that’s all the more reason to try and get people to act more rationally.

“But sometimes you can’t wait until you have all the information you need. Sometimes you need to act right away.”

Yeah, true… true. Still, we can prepare in advance for those situations. For example, you might have reason to believe that you’re going to start a new project at your job. That’s going to involve a lot of decisions, and any poor decision at such an early stage can magnify as time goes by. That’s why you prepare the best you can for those quick decisions that you know you’ll be making.

“But we have to use intuition sometimes. And sometimes, my intuitions are pretty good!”

Yeah, intuitions are just decisions based on experience. I remember reading that chess masters, y’know like Bobby Fischer or Kasparov, don’t even deliberate on their decisions, they just know; whereas chess experts, a level below master, do deliberate. But to get to that level of mastery, you need tens of thousands of hours of practice, man. Only a few of us are lucky enough to have that kind of experience in even a very narrow area. If you’re something like an intermediate chess player in an area with a bunch of skilled chess players, your intuition is going to suck.

“But I’m not sure an AI can ever be conscious.”

Maybe not, but that’s not really important. Did you hear about Watson? That machine that beat those Jeopardy players? They’re saying Watson could act as a medical diagnostician like House and do a better job at it. Not only that, but it’d be easier than playing Jeopardy… isn’t that crazy?

• Oh, yeah, I completely agree. But, it does know a helluva lot. It put us on the moon, gave us amazing technology like this [pull out your cellphone], and there’s every reason to think it’s going to blow our minds in the future.

I like the others, but I think the problem with this one is that it doesn’t provide them with any reason why they shouldn’t just fill the gaps in whatever science knows now with whatever the hell they want.

• First, thanks to lukeprog for posting this discussion post. The Ohio Less Wrong group has been discussing elevator pitches, and the comments here are sure to help us!

I often end up pitching LW stuff to people who are atheists, but not rationalists. I think this type of person is a great potential “recruit”, because they WANT a community, but often find the atheistic community a little too “patting ourselves on the back”-ish (as do I). My general pitch is that Less Wrong is like the next step: “Yeah, we’re all (mainly) atheists, but now what??”

Here’s an example from a recent facebook comment thread:

Other person- What exactly do atheist groups do? I went to a couple meetings of [Freethought Group] here at [Local big university], but it turned out to be exactly like Sunday school, except instead of reading Bible verses, everyone talked about why religion was terrible. It’s not exactly what I’m all about.

Me- Yeah, I hate “Rah rah, Atheism!” stuff too. I know [Person A] and [Person B] from lesswrong.com. I like the site because it’s like... “Yeah, we’ve all got the atheism stuff figured out. Let’s move on and see where we can go from there.”

Then I point them to Methods of Rationality, and hopefully now to our meetups.

• Coming up with elevator pitches/responses strikes me as a great activity to do at LW meetups.

• If there is interest in some discussion logs to analyse, I’m having a lengthy FB thread with a fairly intelligent theist I knew from rabbinical seminary. I don’t think his arguments are particularly good, and I’m not great at arguing either, though I hope my content is a bit more convincing despite lack of style. I do not expect to change his mind (he holds a rabbinical position and the chances of him changing his mind are near zero), but there are some observers I care about and this is an exercise in rationality for me. I can anonymize and post if people find this kind of thing interesting; I would certainly appreciate some feedback.

• Well, I would find it interesting, but as a point of order: maybe you should let him know you’re doing this (even anonymizedly) so he can get help from a gang of his friends too?

• I have no intention of having this turn into a public debate out of a Facebook thread. This is a chance to improve my rationality and argumentation skills.

• Yes… I took “there are some observers I care about” plus “I would appreciate some feedback” to mean ‘I’d like some debate advice (which I will be applying)’. If that’s not getting help from a gang of your friends, I don’t know what is.

• You’re correct, it’s a side benefit, but having a thread evolve into some kind of public debate looks silly. If public debate on such issues is desired, there are orders of magnitude better ways of doing it than this.

• I don’t think pedanterrific is planning to have a bunch of LWers start commenting on the thread in support of atheism. I think he’s expecting a bunch of LWers to give you advice in this thread, which you will then use in your own posts. And he thinks the rabbi should be given an opportunity to ask his own community for similar advice. To use a boxing metaphor, nobody else is going to start fighting, but you’re going to have more coaches and your opponent should too.

• I got that, but having to tell him that there are a bunch of people helping, bring your friends, seems awkward in the context. I’d rather not have the help and just let people view the log as a post mortem, for improving my rationality. Another part of it is the fact that I’m actually doing OK in the argument (I think) and “calling for help” would look like/could be spun as a weakness.

• Okay then! That makes sense. Also, I support posting the log when the argument is done; I’d enjoy reading it and would be happy to comment.

• The third, compromise, option would be, if I end up using a suggestion from LW, to say “(I got this argument from talking it over with a friend)”, though I’m not sure if that goes far enough to satisfy the standards of a fair fight people want to see.

• I too am a member of the Ohio Less Wrong group. I was quite surprised to see this topic come up in Discussion, but I approve wholeheartedly.

My thoughts on the subject are leaning heavily towards the current equivalent of an ‘elevator pitch’ we have already: the Welcome to Less Wrong piece on the front page.

I particularly like the portion right at the beginning, because it grabs onto the central reason for wanting to be rational in the first place. Start with the absolute basics for something like an elevator pitch, if you ask me.

Thinking and deciding are central to our daily lives. The Less Wrong community aims to gain expertise in how human brains think and decide, so that we can do so more successfully.

I might cut out the part about ‘human brains’ though. Talk like that tends to encourage folks to peg you as a nerd right away, and ‘nerd’ has baggage you don’t want if you’re introducing an average person.

• The elevator pitch that got me most excited about rationality is from Raising the Sanity Waterline. It only deals with epistemic rationality, which is an issue, and it, admittedly, is best suited to people who belong to a sanity-focused minority, like atheism or something political. It was phrased with regard to religion originally, so I’ll keep it this way here, but it can easily be tailored.

“What is rationality?”

Imagine you’re teaching a class to deluded religious people, and you want to get them to change their minds and become atheists, but you absolutely cannot talk about religion in any way. What would you do? You’d have to go deeper than talking about religion itself. You’d have to teach your students how to think clearly and actually reevaluate their beliefs. That’s (epistemic) rationality.

“Why is rationality important? Shouldn’t we focus on religion first?”

By focusing on rationality itself you not only can approach religion in a non-threatening way, but you can also align yourself with other sane people who may care about economics or politics or medicine. By working together you can get their support, even though they may not care about atheism per se.

• “But you can’t expect people to act rationally. We are emotional creatures.”

Yes, we are emotional creatures. But being emotional is not incompatible with being rational! In fact, being emotional sometimes makes us more rational. For example, anger can inhibit some cognitive biases, and people who sustain damage to “emotional” areas of their brains do not become more rational, even when they retain memory, logical reasoning ability, and facility with language. What we want to do is make the best possible use of our available tools, including our emotional tools, in order to get the things that we really want.

• Remember that your links don’t work in speech. :D

• Clearly right. I had thought about carrying around hard copies of papers in a backpack so that I could hand them out as I mention them, but … ;)

• One of the most difficult arguments I've had to make is convincing people that they can be more rational. Some people have told me that they're simply incapable of assigning numbers and probabilities to beliefs, even while acknowledging that doing so is superior for decision making.

• This. I’m skep­ti­cal of al­most ev­ery nu­mer­i­cal prob­a­bil­ity es­ti­mate I hear un­less the steps are out­lined to me.

• No joke in­tended, but how much more skep­ti­cal are you, per­centage-wise, of nu­mer­i­cal prob­a­bil­ity es­ti­mates than vague, nat­u­ral lan­guage prob­a­bil­ity es­ti­mates? Please dis­guise your in­tu­itive sense of your feel­ings as a form of math.

Ideally, de­liver your an­swer in a C-3PO voice.

• 40 per­cent.

• This may be one rea­son why peo­ple are re­luc­tant to as­sign num­bers to be­liefs in the first place. Peo­ple equate num­bers with cer­tainty and au­thor­ity, whereas a prob­a­bil­ity is just a way of say­ing how un­cer­tain you are about some­thing.

When giv­ing a num­ber for a sub­jec­tive prob­a­bil­ity, I of­ten feel like it should be a two-di­men­sional quan­tity: prob­a­bil­ity and au­thor­ity. The “au­thor­ity” figure would be an es­ti­mate of “if you dis­agree with me now but we man­age to come to an agree­ment in the next 5 min­utes, what are the chances of me hav­ing to up­date my be­liefs ver­sus you?”
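One way to make this proposal concrete: a minimal sketch in Python where a belief is stored as a (probability, authority) pair. The class name and fields are my own invention for illustration, not anything standard.

```python
from dataclasses import dataclass


@dataclass
class Belief:
    """A subjective probability plus a self-assessed 'authority' weight.

    `authority` is the estimate described above: roughly, if someone
    disagrees with me now but we reach agreement in the next 5 minutes,
    how likely is it that *they* are the one who has to update, not me?
    """
    probability: float  # how likely I think the claim is (0..1)
    authority: float    # how likely my estimate is to survive a disagreement (0..1)


# e.g. "70% confident it will rain, but I'd readily defer to a meteorologist"
rain_tomorrow = Belief(probability=0.7, authority=0.3)
print(rain_tomorrow)
```

The point of the second number is that "90% sure" from a domain expert and "90% sure" from a layman's gut feeling are very different claims, and collapsing them into one figure is part of what makes numerical estimates feel overconfident.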

• I agree that it can be difficult con­vinc­ing peo­ple that they can be more ra­tio­nal. But I think start­ing new peo­ple off with the idea of as­sign­ing prob­a­bil­ities to their be­liefs is the wrong tac­tic. It’s like try­ing to get some­one who doesn’t know how to walk, to run a marathon.

What do you think about starting people off with the more accessible ideas on Less Wrong? I can think of things like: the sunk cost fallacy, not arguing things "by definition", and admitting to a certain level of uncertainty. I'm sure you can think of others.

I would bet that point­ing peo­ple to a more spe­cific idea, like those listed above, would make them more likely to feel like there are ac­tual con­cepts on LW that they per­son­ally can learn and ap­ply. It’s sort of like the “Shock Level” the­ory, but in­stead it’s “Ra­tion­al­ity Level”:

Rationality Level 0 - I don't think being rational is at all a good thing. I believe 100% in my intuitions!
Rationality Level 1 - I see how being rational could help me, but I doubt my personal ability to apply these techniques.
Rationality Level 2 - I am trying to be rational, but rarely succeed (this is where I would place myself).
Rationality Level 3 - I am pretty good at this whole "rationality" thing!
Rationality Level 4 - I Win At Life!

I bet with some thought, some­one else can come up with a bet­ter set of “Ra­tion­al­ity Levels”.

• “So, I hear you care about ra­tio­nal­ity. What’s that about?”

Rationality is about improving your thinking so that you make better decisions. Sometimes you make decisions that turn out badly because there is some piece of knowledge or information that you really needed but didn't have. But sometimes, even with the same information, you can make a better choice by thinking about things differently. In the narrow sense, rationality is getting your brain to make the best use of the information you have so you can make the best choice. In the wider sense, rationality is about filling your brain with the best information in the first place.

That might just sound like com­mon sense—that peo­ple should think care­fully about things—but it turns out that there are a whole lot of re­ally com­mon mis­takes that peo­ple don’t re­al­ize that they are mak­ing, and it re­ally is pos­si­ble to learn bet­ter pat­terns of think­ing that let you make bet­ter de­ci­sions.

• I'm not sure if this deserves its own article, so I'm posting it here: what would be an interesting cognitive bias / debiasing technique to cover in a [Pecha Kucha](http://www.pecha-kucha.org/what) style presentation for a college writing class?

Given the format, it should be fairly easy to explain (I have less time than advertised, only 15 slides instead of 20!). So far, I've thought about doing the planning fallacy, the representativeness heuristic, or the disjunction fallacy. All three are ones I can already speak casually about, and none leaps out at me as empowering motivated cognition (...a topic which would empower it, huh).

I would personally like to do Bayes' Theorem, but 1) I can't think of a way to compress it down to five minutes, and 2) I can't think of a way for other people to help compress it down to five minutes without also omitting the math.
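For what it's worth, the math itself is only a few lines; here's one worked instance of Bayes' theorem, with made-up numbers (a test for a condition with a 1% base rate, 90% sensitivity, and a 9% false-positive rate), as a gauge of how much has to fit into those five minutes.

```python
# Made-up numbers for a diagnostic-test example of Bayes' theorem.
prior = 0.01               # P(condition): the base rate
p_pos_given_cond = 0.90    # P(positive | condition): the sensitivity
p_pos_given_none = 0.09    # P(positive | no condition): the false-positive rate

# Law of total probability: P(positive)
p_pos = p_pos_given_cond * prior + p_pos_given_none * (1 - prior)

# Bayes' theorem: P(condition | positive)
posterior = p_pos_given_cond * prior / p_pos
print(f"P(condition | positive test) = {posterior:.3f}")
```

The counterintuitive punchline (a positive result still leaves the condition at under 10%) is arguably the part worth spending the slides on, more than the algebra itself.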

Down­vote if this is off topic. If not, please tell me why be­cause I’ll just as­sume it’s an off­topic down­vote!

• “But you can’t ex­pect peo­ple to act ra­tio­nally. We are emo­tional crea­tures.”

This may be difficult to answer appropriately without knowing what the hypothetical speaker means by "emotions" (or "expect", for that matter). But the phrase seems to me like a potential cached one, so ve may not know it either.

A pos­si­ble ele­va­tor re­sponse be­low:

Rationality is not Vulcan-like behavior; you don't have to renounce your emotions in order to act rationally. Indeed, for most people, many emotions (like affection, wonder, or love) are very valuable, and applied rationality is knowing how to obtain and protect what is truly precious to you.
What is important is to rationally understand how your emotions affect your judgment, so you can consciously avoid or dampen unwanted emotional reactions that would otherwise have undesirable consequences for you.

• It’s about figur­ing out the mis­takes that peo­ple tend to make, so you can avoid mak­ing them. (“Like what?”) Like peo­ple aren’t good at chang­ing their minds. They only want to think about in­for­ma­tion that sup­ports what they already be­lieve. But re­ally, I should look at all the in­for­ma­tion that comes my way and de­cide—is my old be­lief re­ally true? Or should I change my mind based on the new in­for­ma­tion I got?

• Does Sark’s re­cent tweet, “In­tu­itions are ma­chines, not in­te­rior dec­o­ra­tion,” work as an ele­va­tor pitch, or is it too opaque to a non-LWer? Or is it too short? Maybe it’s a fire­man’s pole pitch.

• I find the con­scious AI re­sponse to be the most com­pel­ling. Now that I think about it, that’s more ev­i­dence for the use­ful­ness of con­crete ex­am­ples.

• “Science doesn’t know ev­ery­thing.”

Yes, but science is all about using whatever methods work to keep producing new knowledge. All the knowledge we can produce through mechanisms we know are actually trustworthy will eventually become part of science; the only stuff that ultimately gets left out is information we can only generate through means we know aren't reliable at producing truth.

• Biases, from Wikipedia's list of cognitive biases. Cue: an example of the bias; Response: the name of the bias, the pattern of reasoning behind it, and the normative model it violates.

Edit: put this on the wrong page ac­ci­den­tally.
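The cue/response scheme above could be written down as plain data, ready for spaced-repetition software. A minimal sketch, with one illustrative card filled in using the sunk cost fallacy (the field names are my own assumptions, not any particular flashcard app's format):

```python
# One cue/response card per bias: the cue is a concrete example,
# the response is the name, the reasoning pattern, and the violated norm.
cards = [
    {
        "cue": "After losing money on a project, you keep funding it "
               "'because we've already invested so much.'",
        "bias": "Sunk cost fallacy",
        "pattern": "Letting unrecoverable past costs influence a present decision.",
        "normative_model": "Expected utility: only future costs and benefits matter.",
    },
]

# Quiz loop: show the cue, then reveal the response fields.
for card in cards:
    print("Cue:", card["cue"])
    print("  ->", card["bias"], "|", card["pattern"])
```

Filling the list from Wikipedia's entries would then be mostly mechanical.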