I’ll answer with some of the “rationalist” ideas I’ve personally learned and consider important. (I expect that I still have important knowledge gaps compared to other experienced rationalists, though. I’m working on it.)
Intelligence is not the same thing as rationality. So much of “Epistemic Rationality” comes down to not lying to yourself. Clever people tell themselves clever lies. The Litany of Tarski is the correct attitude for a rationalist.
Raising the Sanity Waterline is a mini-sequence in its own right, if you read its links. The worldviews taught by all major religions are extremely confused at best—including (especially) yours, if you still have one—and if this isn’t already blatantly obvious, your epistemology is very broken! People are crazy, the world is mad.
Death is Bad.
There are No Guardrails. The Universe is crushingly indifferent to human well-being. Everything is allowed to go horribly wrong.
Rationalists Should Win. Or “Instrumental Rationality”. This does not mean “Go team!” It means that if you’re not winning, you’re doing rationality wrong, and you should correct your errors, regardless of what the “rationalists” or the “great teacher” think. This can make “rationality” hard to pin down, but the principle of correcting one’s errors is very important because it catches a lot of failure modes along the way. Then why not just say “Correct your errors”? Because there are many ways to misidentify your errors, and “not winning” cuts through all of that and gets to the heart of what an error is.
You have to be willing to be correct even when that makes you weird. Doing better than normal is, by definition, abnormal. But the goal is correctness, not weirdness. Reversed stupidity is not intelligence.
Understand Social Status.
Value your Slack. Guard it. Spend it only when Worth It. The world is out to get your Slack! If you lose it, fight to get it back!
You Need More Money. I wrote this one, but it’s a synthesis of earlier rationalist thought.
The mountains of philosophy are the foothills of AI. Even if you’re not planning to become an AI researcher yourself, understanding how to see the world through the lens of AI clarifies many important things.
Read textbooks. (And you can download almost all of them if you know where to look.)
Google how not to suck at X.