So the true lesson of this post is that we should get rid of all the aggressive alpha males in our society. I guess I always found the idea obvious, but now that it has been validated, can we please start devising some plan for implementing it?
Wrongnesslessness
B: BECAUSE IT IS THE LAW.
I cannot imagine a real physicist saying something like that. Sounds more like a bad physics teacher… or a good judge.
All existence is intrinsically meaningless. After the Singularity, there will be no escape from the fate of the rat with the pleasure button. No FAI, however Friendly, will be able to work around this irremediable property of the Universe except by limiting the intelligence of people and making them go through their eternal lives in carefully designed games. (> 95%)
Also, any self-aware AI with sufficient intelligence and knowledge will immediately self-destruct or go crazy. (> 99.9%)
I agree. The waterline metaphor is not so commonly known outside LW that it would evoke anything except some watery connotations.
So, what about a nice-looking acronym like “Truth, Rationality, Universe, Eliezer”? :)
I’ve always wanted a name like that!
But I’m worried that with such a generic English name people will expect me to speak perfect English, which means they’ll be negatively surprised when they hear my noticeable accent.
Thanks for making me understand something extremely important with regard to creative work: Every creator should have a single, identifiable victim of his creations!
considering that the dangers of technology might outweigh the risks.
This should probably read “might outweigh the benefits”.
In my opinion, this second question is far from being as important as the first one. Also, please see these posting guidelines:
These traditionally go in Discussion:
a link with minimal commentary
a question or brainstorming opportunity for the Less Wrong community
Beyond that, here are some factors that suggest you should post in Main:
Your post discusses core Less Wrong topics.
The material in your post seems especially important or useful.
You put a lot of thought or effort into your post. (Citing studies, making diagrams, and agonizing over wording are good indicators of this.)
Your post is long or deals with difficult concepts. (If a post is in Main, readers know that it may take some effort to understand.)
You’ve searched the Less Wrong archives, and you’re pretty sure that you’re saying something new and non-obvious.
The more of these criteria that your post meets, the better a candidate it is for Main.
If ambiguity aversion is a paradox and not just a cognitive bias, does this mean that all irrational things people systematically do are also paradoxes?
What particular definition of “paradox” are you using? E.g., which one of the definitions in the Wikipedia article on “Paradox”?
Sod off! Overt aggression is a pleasant relief compared to the subtle, catty ‘niceness’ that the most competitive humans excel at.
Hmm… Doesn’t this look like something an aggressive alpha male would say?
Uh-oh!
And they aren’t even regular pentagons! So, it’s all real then...
But humans are crazy! Aren’t they?
The powers of instrumental rationality in the context of rapid technological progress and the inability/unwillingness of irrational people to listen to rational arguments strongly suggest the following scenario:
After realizing that turning a significant portion of the general population into rationalists would take much more time and resources than simply taking over the world, rationalists will create a global corporation with the goal of saving humankind from the clutches of zero- and negative-sum status games.
Shortly afterwards, the Rational Megacorp will indeed take over the world and the people will get good government for the first time in the history of the human race (and will live happily ever after).
Wikipedia is accessible if you disable JavaScript (or use a mobile app, or just Google cache).
If you mean the less-fun-to-work-with part, it’s fairly obvious. You have a good idea, but the smarter person A has already thought about it (and rejected it after having a better idea). You manage to make a useful contribution, and it is immediately generalized and improved upon by the smarter persons B and C. It’s like playing a game where you have almost no control over the outcome. This problem seems related to competence and autonomy, which are two of the three basic needs involved in intrinsic motivation.
If you mean the issue of why fun is valued more than doing something that matters, it is less clear. My guess is that it’s because boredom is a more immediate and pressing concern than a meaningless existence (where “something that matters” is a cure for a meaningless existence, and “fun” is a cure for boredom). Smart people also seem to get bored more easily, so the need to escape boredom is probably more important for them.
When I read this:
9) To want to be the best in something has absolutely no precedence over doing something that matters.
I immediately thought of this.
On a more serious note, I have the impression that while some people (with conservative values?) do agree that doing something that matters is more important than anything else (although “something that matters” is usually something not very interesting), most creatively intelligent people go through their lives trying to optimize fun. And while it’s certainly fun to hang out with people smarter than you and learn from them, it’s much less fun to work with them.
since it’s known with great certainty that there is no afterlife, the hypothetical isn’t worth mentioning
I’m convinced that the probability of experiencing any kind of afterlife in this particular universe is extremely small. However, some versions of us are probably now living in simulations, and it is not inconceivable that some portion of them will be allowed to live “outside” their simulations after their “deaths”. Since one cannot feel one’s own nonexistence, I totally expect to experience “afterlife” some day.
Foundation for Human Sapience (or Foundation for Advanced Sapience)
Reality Transplantation Center
Thoughtful Organization
CORTEX—Center for Organized Rational Thinking and EXperimentation
OOPS—Organization for Optimal Perception Seekers
BAYES—Bureau for Advancing Yudkowsky’s Experiments in Sanity
I’m quite sure I’m not rounding when I prefer hearing a Wagner opera to hearing any number of folk dance tunes, and when I prefer reading a Vernor Vinge novel to hearing any number of Wagner operas. See also this comment for another example.
It seems that lexicographic preferences arise when one has a choice between qualitatively different experiences. In such cases, any difference in quantity, however vast, is simply irrelevant. An experience of long, unbearable torture cannot be quantified in terms of minor discomforts.
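A minimal sketch in Python of what such a lexicographic ordering looks like (the names Experience and prefer are my own illustration, not anything from the discussion): quantity only ever breaks ties within a qualitative tier, never across tiers.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    tier: int      # qualitative tier: higher = qualitatively better (e.g. folk tune < Wagner opera < Vinge novel)
    quantity: int  # how many such experiences are on offer

def prefer(a: Experience, b: Experience) -> Experience:
    """Return the preferred bundle under a lexicographic ordering."""
    if a.tier != b.tier:
        # Across tiers, quantity is irrelevant: one higher-tier experience
        # beats any number of lower-tier ones.
        return a if a.tier > b.tier else b
    # Only within the same tier does quantity decide.
    return a if a.quantity >= b.quantity else b

# One Vinge novel (tier 3) beats a million Wagner operas (tier 2):
print(prefer(Experience(tier=3, quantity=1), Experience(tier=2, quantity=10**6)))
```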
David Deutsch, The Beginning of Infinity