Elevator pitches/responses for rationality/AI

I’m trying to develop a large set of elevator pitches / elevator responses for the two major topics of LW: rationality and AI.

An elevator pitch lasts 20-60 seconds, and is not necessarily prompted by anything, or at most is prompted by something very vague like “So, I heard you talking about ‘rationality’. What’s that about?”

An elevator response is a 20-60 second, highly optimized response to a commonly heard sentence or idea, for example, “Science doesn’t know everything.”

Examples (but I hope you can improve upon them):

“So, I hear you care about rationality. What’s that about?”

Well, we all have beliefs about the world, and we use those beliefs to make decisions that we think will bring us the most of what we want. What most people don’t realize is that there is a mathematically optimal way to update your beliefs in response to evidence, and a mathematically optimal way to figure out which decision is most likely to get you what you want; these methods are defined by probability theory and decision theory. Moreover, cognitive science has discovered a long list of predictable mistakes our brains make when forming beliefs and making decisions, and there are particular things we can do to improve our beliefs and decisions. [This is the abstract version; it’s probably better to open with a concrete and vivid example.]
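The “mathematically optimal way to update your beliefs” here is Bayes’ theorem. As a minimal illustration (all numbers are made up for the example), here is the textbook medical-test calculation in Python:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical numbers: a condition with a 1% base rate, and a test
# with 90% sensitivity and a 5% false-positive rate.

def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior probability of the hypothesis after a positive test."""
    # P(E) = P(E|H)P(H) + P(E|not-H)P(not-H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # prints 0.154
```

Note how the posterior (about 15%) is far below the test’s 90% sensitivity; this is exactly the kind of predictable mistake (base-rate neglect) the pitch refers to.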

“Science doesn’t know everything.”

As the comedian Dara Ó Briain once said, science knows it doesn’t know everything, or else it’d stop. But just because science doesn’t know everything doesn’t mean you can fill the gaps with whatever theory most appeals to you. Anybody can do that, with whatever crazy theory they want.

“But you can’t expect people to act rationally. We are emotional creatures.”

But of course. Expecting people to be rational is irrational. If you expect people to usually be rational, you’re ignoring an enormous amount of evidence about how humans work.

“But sometimes you can’t wait until you have all the information you need. Sometimes you need to act right away.”

But of course. You have to weigh the cost of gathering new information against the expected value of that information. Sometimes it’s best to just act on the best of what you know right now.
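This trade-off can be made concrete with a standard expected-value-of-perfect-information calculation. A sketch with made-up numbers, assuming the alternative to acting is doing nothing (payoff 0):

```python
# Hypothetical decision: act now, or pay to learn the true state first.
# All payoffs and probabilities are invented for illustration.

def value_of_perfect_information(p_good, payoff_good, payoff_bad):
    """How much knowing the true state before deciding is worth."""
    # Acting on current beliefs, versus doing nothing:
    act_now = p_good * payoff_good + (1 - p_good) * payoff_bad
    best_uninformed = max(act_now, 0.0)
    # With perfect information, you act only when the state is good:
    informed = p_good * payoff_good
    return informed - best_uninformed

# If you're 80% sure, being right is worth 100, and being wrong costs 50:
evpi = value_of_perfect_information(p_good=0.8, payoff_good=100, payoff_bad=-50)
print(evpi)  # prints 10.0
```

Here waiting for information is only worth it if gathering it costs less than 10; otherwise the rational move is to act on what you know now, which is the point of the response above.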

“But we have to use intuition sometimes. And sometimes, my intuitions are pretty good!”

But of course. We even have lots of data on which situations are conducive to intuitive judgment, and which ones are not. And sometimes it’s rational to use your intuition, because it’s the best you’ve got and you don’t have time to write out a bunch of probability calculations.

“But I’m not sure an AI can ever be conscious.”

That won’t keep it from being “intelligent” in the sense of being very good at optimizing the world according to its preferences. A chess computer is great at optimizing the chess board according to its preferences, and it doesn’t need to be conscious to do so.

Please post your own elevator pitches and responses in the comments, and vote for your favorites!