[bored now. comment deleted.]
“Caledonian, I look forward to being able to downvote your comments instead of deleting them.”
What, the software forces you to delete my comments? Someone’s holding a gun to your head?
I look forward to your forming a completely closed memetic sphere around yourself, instead of this partially-closed system you’ve already established.
“you will get a semi-obsessed sub-culture of users with a few shared biases who effectively take over”
Of course! That’s the point of the exercise.
The hope is that the shared biases will be ones that the site owner considers valuable and useful, and that the prospective audience for the site wants to read. A completely unbiased user culture would view anything that was posted (or not posted) as equally valuable. What use is that?
Besides, the site as it stands is already dominated by two people’s biases… and as Eliezer seems to do most of the moderation, it’s effectively one person. If that were a problem, why are you here?
Specifying an entire world by listing every single thing you want to be included in it would take a very long time. Most worlds complex enough to be interesting are far too complicated to talk about in that manner.
Perhaps it would be more efficient to list the specific things you want to be excluded. Presumably the set of things you object to is far smaller than those you prefer or are neutral towards.
Because I’m curious:
How much evidence, and what kind, would be necessary before suspicions of contrarianism are rejected in favor of the conclusion that the belief was wrong?
Surely this is a relevant question for a Bayesian.
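The question above can be made concrete with a toy Bayesian update. This is an illustrative sketch only; the prior, the likelihood ratio, and the hypothetical `posterior_wrong` helper are all assumptions of mine, not anything stated in the thread:

```python
# Toy model with two hypotheses about a persistent dissenter:
#   H1: the dissent is mere contrarianism
#   H2: the disputed belief is actually wrong
# Assume each independent piece of supporting evidence is 3x as likely under H2.

def posterior_wrong(prior_wrong, likelihood_ratio, n_observations):
    """Posterior probability of H2 after n observations, via the odds form of Bayes."""
    odds = prior_wrong / (1 - prior_wrong)       # prior odds for H2
    odds *= likelihood_ratio ** n_observations   # multiply in each observation
    return odds / (1 + odds)                     # convert odds back to probability

# Even a 95% prior for "just contrarianism" is overcome by a handful
# of observations; print the posterior for H2 as evidence accumulates.
for n in range(0, 9, 2):
    print(n, round(posterior_wrong(0.05, 3.0, n), 3))
```

Under these assumed numbers the posterior crosses 0.8 after four observations, which is one way to answer "how much evidence" quantitatively rather than rhetorically.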
I would personally be more concerned about an AI trying to make me deliriously happy no matter what methods it used.
Happiness is part of our cybernetic feedback mechanism. It’s designed to end once we’re on a particular course of action, just as pain ends when we act to prevent damage to ourselves. It’s not capable of being a permanent state, unless we drive our nervous system to such an extreme that we break its ability to adjust, and that would probably be lethal.
Any method of producing constant happiness ultimately turns out to be pretty much equivalent to heroin—you compensate so that even extreme levels of the stimulus have no effect, forming the new functional baseline, and the old equilibrium becomes excruciating agony for as long as the compensations remain. Addiction—and desensitization—is inevitable.
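The compensation-to-a-new-baseline claim can be sketched as a simple adaptation loop. This is an illustrative model, not neuroscience; the adaptation rate and the `simulate` helper are arbitrary assumptions chosen to show the shape of the argument:

```python
# Minimal habituation model: perceived intensity is the stimulus minus an
# adaptive baseline, and the baseline drifts toward the recent stimulus level.

def simulate(stimulus_levels, rate=0.2):
    """Return perceived intensity over time as the baseline chases the stimulus."""
    baseline = 0.0
    perceived = []
    for s in stimulus_levels:
        perceived.append(s - baseline)
        baseline += rate * (s - baseline)  # compensation toward a new equilibrium
    return perceived

# A constant extreme stimulus: the felt effect decays toward zero...
high = simulate([10.0] * 30)
# ...and then returning to the old normal registers as strongly negative.
withdrawal = simulate([10.0] * 30 + [0.0])
print(round(high[-1], 3), round(withdrawal[-1], 3))
```

In this toy model the sustained "high" fades to almost nothing while the return to the old equilibrium is felt at nearly the full negative magnitude, which is the heroin analogy in miniature.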
Few people become bored with jumping in SMB because
1) becoming skilled at it is quite hard,
2) it’s used to accomplish specific tasks and is quite useful in that context,
3) it’s easier to become bored with the game as a whole than with that particular part of it.
Having to take action to avoid unpleasant surprises is usually pleasant, as long as your personal resources aren’t stretched too much in the process.
If you eliminate the potential for unpleasant surprises, the game isn’t much fun. (Imagine playing chess against an opponent that was so predictable as to never threaten to beat you. Why bother?)
Lots of people find planning their character design decisions, and exploring in detail the mechanical consequences of their designs, to be ‘fun’.
Which is why there are so many sites that (for example) post in their entirety the skills for Diablo II and how each additional skillpoint affects the result—information that cannot be easily acquired from the game itself.
Although there are some basic principles behind ‘fun’, the specific things that make something ‘fun’ vary wildly from one person to another. If what the designers created wasn’t to your taste, perhaps it’s not that they failed, but that you’re not a member of their target audience.
Gwern, why do you think we have those emotional responses to pain in the first place?
Yes, I’m aware of forms of brain damage that make people not care about negative stimuli. They’re extraordinarily crippling.
Nancy Lebovitz, those are great. I may have to appropriate some of those.
I’d say the primary bad thing about pain is not that it hurts, but that it’s pushy and won’t tune out. You could learn to sleep in a ship’s engine room, but a mere stubbed toe grabs and holds your attention.
That, I think we could delete with impunity.
If we could learn to simply get along with any level of pain… how would it constitute an obstacle?
Real accomplishment requires real obstacles to avoid, remove, or transcend. Real obstacles require real consequences. And real consequences require pain.
I would suggest that this book, and the two books immediately preceding it, are an examination of the difference between what people believe they want the world to be and what they actually want and need it to be. When people gain enough power to create their vision of the perfect world, they do—and then find they’ve constructed an elaborate prison at best and a slow and terrible death at worst.
An actual “perfect world” can’t be safe, controlled, or certain—and the inevitable consequence of that is pain. But so is delight.
The opposite of a Great Truth is unpretentiousness.
I admire your persistence; however, you should be reminded that preaching to the deaf is not a particularly worthwhile activity.
My own complaints regarding the Brave New World consist mainly of noting that Huxley’s dystopia specialized in making people fit the needs of society. And if that meant whittling down a square peg so it would fit into a round hole, so be it.
Embryos were intentionally damaged (primarily through exposure to alcohol) so that they would be unlikely to have capabilities beyond what society needed them to have.
This is completely incompatible with my beliefs about the necessity of self-regulating feedback loops, and developing order from the bottom upwards.
It’s really quite simple: the people who designed and maintain the legal system faced a choice. Is it better for the system to be consistent but endlessly repeat its mistakes, or inconsistent but error-correcting?
They preferred it to be predictable.
And that is why it is absurd to call it a “justice system”. It’s not concerned with justice.
Or, to put it another way:
“Fixing” the future, in a way that renders human beings completely redundant and unnecessary even to themselves, isn’t fixing anything. It’s creating a problem of unlimited scope.
If that’s the ultimate outcome of, say, producing superhuman minds—whether they’re somehow enslaved to human preferences or not—then we’re trying very hard to create a world in which the only rational treatment of humanity is extinction. Whether imposed from without or from within, voluntarily, is irrelevant.
Based on the comments here, it would seem that it’s the people who reject ultimately-meaningless forms of play—that is, ‘play’ that doesn’t develop skills useful to perpetuation—and concentrate on the “real world” who will end up existing.
And the Luddites will inherit the Earth…
“The mere fact that he has put so much time and energy into working on this issue over many years is strong evidence that he sincerely believes that it is a real possibility.”
Only if there are no other consequences of his actions that he desires. People working to forward an ideology don’t necessarily believe the ideology they’re selling—they only need to value some of the consequences of spreading it.