Elevator pitches/responses for rationality/AI

I’m trying to develop a large set of elevator pitches / elevator responses for the two major topics of LW: rationality and AI.

An elevator pitch lasts 20-60 seconds and is not necessarily prompted by anything, or at most by something very vague like “So, I heard you talking about ‘rationality’. What’s that about?”

An elevator response is a 20-60 second, highly optimized response to a commonly heard sentence or idea, for example, “Science doesn’t know everything.”

Examples (but I hope you can improve upon them):

“So, I hear you care about rationality. What’s that about?”

Well, we all have beliefs about the world, and we use those beliefs to make decisions we think will get us the most of what we want. What most people don’t realize is that probability theory defines a mathematically optimal way to update your beliefs in response to evidence, and decision theory defines a mathematically optimal way to pick the action most likely to get you what you want. Moreover, cognitive science has catalogued a long list of predictable mistakes our brains make when forming beliefs and making decisions, and there are specific things we can do to improve. [This is the abstract version; it’s probably better to open with a concrete, vivid example.]
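(For the curious: the “mathematically optimal way to update your beliefs” is Bayes’ theorem. Here’s a minimal sketch in Python; the medical-test numbers are made up purely for illustration.)

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)

def bayes_update(prior, true_positive_rate, false_positive_rate):
    """Posterior probability of a hypothesis after seeing positive evidence."""
    p_evidence = (true_positive_rate * prior
                  + false_positive_rate * (1 - prior))
    return true_positive_rate * prior / p_evidence

# Hypothetical numbers: 1% base rate, a test that catches 90% of true
# cases but also fires on 5% of healthy people.
posterior = bayes_update(prior=0.01,
                         true_positive_rate=0.90,
                         false_positive_rate=0.05)
print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.154
```

Note the punchline: even after a positive result from a 90%-accurate test, the condition is still unlikely, because the base rate dominates. This is exactly the kind of calculation most people’s intuitions get wrong.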

“Science doesn’t know everything.”

As the comedian Dara O’Briain once said, science knows it doesn’t know everything, or else it’d stop. But just because science doesn’t know everything doesn’t mean you can fill the gaps with whatever theory most appeals to you; by that standard, anybody could justify whatever crazy theory they wanted.

“But you can’t expect people to act rationally. We are emotional creatures.”

But of course. Expecting people to usually act rationally is itself irrational: it ignores an enormous amount of evidence about how human brains actually work.

“But sometimes you can’t wait until you have all the information you need. Sometimes you need to act right away.”

But of course. You have to weigh the cost of gathering new information against the expected value of that information. Sometimes it’s best to just act on the best of what you know right now.
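(This tradeoff is the “value of information” calculation from decision theory. A minimal sketch, with made-up payoffs standing in for a real decision problem:)

```python
# Value of information: act now, or pay to learn which world you're in?
# All numbers below are hypothetical, purely for illustration.

def expected_value(payoffs, probs):
    return sum(p * v for p, v in zip(probs, payoffs))

# Two possible worlds, equally likely; two actions with different payoffs.
probs = [0.5, 0.5]
payoff_a = [100, 0]   # action A pays off only in world 1
payoff_b = [0, 80]    # action B pays off only in world 2

# Acting now: pick the action with the higher expected value.
ev_now = max(expected_value(payoff_a, probs),
             expected_value(payoff_b, probs))        # 50

# With perfect information, you pick the best action in each world.
ev_informed = expected_value(
    [max(a, b) for a, b in zip(payoff_a, payoff_b)], probs)  # 90

voi = ev_informed - ev_now  # 40
print(f"Gather more information only if it costs less than {voi}")
```

If learning which world you’re in costs more than 40, acting immediately on current beliefs is the rational move. That’s the formal version of “sometimes you shouldn’t wait.”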

“But we have to use intuition sometimes. And sometimes, my intuitions are pretty good!”

But of course. We even have lots of data on which situations are conducive to intuitive judgment, and which ones are not. And sometimes, it’s rational to use your intuition because it’s the best you’ve got and you don’t have time to write out a bunch of probability calculations.

“But I’m not sure an AI can ever be conscious.”

That won’t keep it from being “intelligent” in the sense of being very good at optimizing the world according to its preferences. A chess computer is great at optimizing the chess board according to its preferences, and it doesn’t need to be conscious to do so.
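(In this sense an “optimizer” is just: score each option against a preference function and pick the best. Here’s a toy sketch, with a made-up evaluation function standing in for a chess engine’s; nothing in it requires consciousness.)

```python
# A "preference" is just an evaluation function over outcomes;
# an optimizer picks whatever action scores highest.

def evaluate(position):
    # Hypothetical scoring rule: prefer positions closer to 10.
    return -abs(position - 10)

def best_move(position, moves):
    """Choose the move leading to the highest-rated position."""
    return max(moves, key=lambda m: evaluate(position + m))

print(best_move(position=7, moves=[-1, +1, +2]))  # +2: lands closest to 10
```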

Please post your own elevator pitches and responses in the comments, and vote for your favorites!