Condensed Less Wrong Wisdom: Yudkowsky Edition, Part I
Mysterious Answers to Mysterious Questions
Ask “What experiences do I anticipate?”, not “What statements do I believe?”
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation.
There’s nothing wrong with focusing your mind, narrowing your categories, excluding possibilities, and sharpening your propositions.
For every expectation of evidence, there is an equal and opposite expectation of counterevidence.
You can only ever seek evidence to test a theory, not to confirm it.
Write down your predictions in advance.
Hindsight bias devalues science: we need to make a conscious effort to be shocked enough.
Be consciously aware of the difference between an explanation and a password.
Fake explanations don’t feel fake. That’s what makes them dangerous.
What distinguishes a semantic stopsign is failure to consider the obvious next question.
Ignorance exists in the map, not in the territory. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person. There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.
What you must avoid is skipping over the mysterious part; you must linger at the mystery to confront it directly.
You have to feel which parts of your map are still blank, and more importantly, pay attention to that feeling.
When you run into something you don’t understand, say “magic”, and leave yourself a placeholder, a reminder of work you will have to do later, and one that prevents an illusion of understanding.
Much of a rationalist’s skill is below the level of words.
Avoid positive bias: look for negative examples.
If a hypothesis does not today have a favorable likelihood ratio over “I don’t know”, it raises the question of why you today believe anything more complicated than “I don’t know”.
If you don’t know, and you guess, you’ll end up being wrong.
You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.
Never forget that there are many more ways to worship something than lighting candles around an altar.
Why should your curiosity be diminished because someone else, not you, knows how the light bulb works? Is this not spite? It’s not enough for you to know; other people must also be ignorant, or you won’t be happy?
The world around you is full of puzzles. Prioritize, if you must. But do not complain that cruel Science has emptied the world of mystery.
Inverted stupidity looks like chaos. Something hard to handle, hard to grasp, hard to guess, something you can’t do anything with.
Saying “I’m ignorant” doesn’t make you knowledgeable. But it is, at least, a different path than saying “it’s too chaotic”.
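The likelihood-ratio point above can be made concrete with a small numerical sketch (hypothetical numbers, not from the original post): a hypothesis earns credence only insofar as the evidence is more probable under it than under its negation.

```python
# Hypothetical numbers illustrating a likelihood ratio: how much more
# probable the evidence E is under hypothesis H than under not-H.
p_evidence_given_h = 0.9      # P(E | H)
p_evidence_given_not_h = 0.3  # P(E | not-H)

# A ratio of 1 would mean the "hypothesis" anticipates no better than
# "I don't know"; here the evidence favors H by roughly a factor of 3.
likelihood_ratio = p_evidence_given_h / p_evidence_given_not_h

# Bayesian update from a 50/50 prior, done in odds form:
prior_odds = 0.5 / 0.5
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(likelihood_ratio, posterior_prob)
```

With these illustrative numbers the posterior comes out around 0.75; a hypothesis whose likelihood ratio is 1 leaves the prior untouched, which is the quote's point.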
A Human’s Guide to Words
http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/
If you’re trying to go anywhere, or even just trying to survive, you had better start paying attention to the three or six dozen optimality criteria that control how you use words, definitions, categories, classes, boundaries, labels, and concepts.
Everything you do in the mind has an effect, and your brain races ahead unconsciously without your supervision.
Logic stays true, wherever you may go,
So logic never tells you where you live.
Before you can question your intuitions, you have to realize that what your mind’s eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.
Definitions don’t need words.
Words do not have intrinsic definitions.
Playing the game of Taboo—being able to describe without using the standard pointer/label/handle—is one of the fundamental rationalist capacities.
Where you see a single confusing thing, with protean and self-contradictory attributes, it is a good guess that your map is cramming too much into one point—you need to pry it apart and allocate some new buckets.
Categorizing has consequences.
People insist that “X, by definition, is a Y!” on those occasions when they’re trying to sneak in a connotation of Y that isn’t directly in the definition, and X doesn’t look all that much like other members of the Y cluster.
Just because there’s a word “art” doesn’t mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.
The way to carve reality at its joints, is to draw your boundaries around concentrations of unusually high probability density in Thingspace.
Reductionism
Reality is laced together a lot more tightly than humans might like to believe.
Since the beginning not one unusual thing has ever happened.
Many philosophers share a dangerous instinct: If you give them a question, they try to answer it.
If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question.
If you keep asking questions, you’ll get to your destination eventually. If you decide too early that you’ve found an answer, you won’t.
When you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you’re done.
Be warned that you may believe you’re done, when all you have is a mere triumphant refutation of a mistake.
Those who dream do not know they dream, but when you wake you know you are awake.
One good cue that you’re dealing with a “wrong question” is when you cannot even imagine any concrete, specific state of how-the-world-is that would answer the question.
To write a wrong question, compare: “Why do I have free will?” with “Why do I think I have free will?”
Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.
Hug the query.
Joy in the Merely Real
Want to fly? Don’t give up on flight. Give up on flying potions and build yourself an airplane.
If I’m going to be happy anywhere,
Or achieve greatness anywhere,
Or learn true secrets anywhere,
Or save the world anywhere,
Or feel strongly anywhere,
Or help people anywhere,
I may as well do it in reality.
If you only care about scientific issues that are controversial, you will end up with a head stuffed full of garbage.
If we cannot take joy in the merely available, our lives will always be frustrated.
If we cannot learn to take joy in the merely real, our lives shall be empty indeed.
The novice goes astray and says “The art failed me”; the master goes astray and says “I failed my art.”
I probably missed a lot in my cursory glances. I chose things based on no objective criteria. Sometimes I paraphrased, perhaps incorrectly. There are a few other big sequences to do.
Please make this a post. It is a valuable resource that I would like to have accessible.
Should I add the other sequences first, you think? It’s already too long, and that’d double the length or more.
Actually, I say put it on the wiki.
That’s a much better idea, I think.
Doubling the length would be fine for an article. Include links back to the detailed original articles.
Agreed. Slogans/quotes are fine as reminders/summaries of points that are explained and defended in more detail, but not as substitutes for them.
From video dialogues:
How do you know the costs of your irrationality if you’re irrational?
We’re here to talk about rationality, which is the art generated when you want something more than your particular mode of thinking.
Well, if you expect the future to be just like the past, calling that “realism” isn’t going to save you from the fact that you’re guaranteed to be wrong.
...there are specific propositions, right? You can’t just bundle all the propositions together and slay them with one mighty blow that consists of one thing you can do wrong if you believe this bundle of propositions.
Curiosity requires ignorance and the ability to relinquish your ignorance, and I see you attaching a lot of importance to your ignorance here.
This sounds to me more like a mistake you are making in your model of the world than something you could actually do to the world itself.
If you want a precise practical AI, you don’t get there by starting with an imprecise practical AI and going to a precise practical AI, you start with a precise impractical AI and then go to a precise and practical AI.
You can make mistakes even if you think you have a precise theory, but if you don’t even think you have a precise theory you’re completely doomed.
One thing you need is a paragraph break (a blank line) before and after the list. The source code should look like this:

    *Mysterious Answers to Mysterious Questions*

    * Ask "What experiences do I anticipate?", not "What statements do I believe?"
    * Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
[Edited to add italics to the subheading.]
How do you put in the editing characters literally? Backslashes?
Markdown.
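To spell out the answer (a minimal illustration of standard Markdown escaping, not part of the original exchange): a backslash before a formatting character makes it literal.

```
\*not italic\*    renders as: *not italic*
*italic*          renders as italic text
```

So the "editing characters" shown literally in the example above are produced by backslash-escaping them in the Markdown source.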
Saved; thank you.
Thanks much.
Also see divia’s post for a spaced repetition database that could be helpful for internalizing these important thinking patterns.
This is an excellent effort. Please keep it up.
I will do so, though I’m worried about what happens after I finish the “Yudkowsky Edition”; there’s lots of other Less Wrong wisdom, but it’s spread out further, and I feel like if I miss some of it then people will be sad. But at any rate I plan on going back and getting the links to the posts for all of the above, then doing the other sequences, then posting the whole giant thing at the top level.
Over the next few days I have a different and probably higher utility post to help Louie Helm write, though.