Tyrrell said:
Nor did I see anything in your preceding “rigorous” posts to establish that being modified fell in this range. It appeared to be a moral assertion for which no argument was given.
Yeah. Eliezer, in your story, being modified just didn't seem bad enough to make killing 15 billion people the obviously preferable alternative. This creates moral ambiguity, which is great in a story, but not if you want to communicate a clear moral.
The way the story was presented, I was thinking, "humanity without suffering, and having to eat non-sentient babies?... Is that really bad enough to justify killing 15 billion people?" Now, as a reader of Overcoming Bias, I know that Value is Fragile, and that scaling up the human brain is a highly risky proposition that the brain is not designed for. So the end result of the Superhappy proposal would not be "humans minus suffering, eating non-sentient babies." It would not be human at all.
The Superhappies can't just surgically remove negative emotions and pain from our brains and leave everything else untouched. More likely, the Superhappies would make a first pass to remove suffering, but the neurochemical changes would drive us all insane (happily insane). The Superhappies would then have to make another pass to stabilize our brains, which would involve messing with who-knows-what. But stabilize us towards what? The Superhappies can't know the "right way" to make a sane human brain that doesn't experience suffering, because no such thing exists. If the Superhappies were ever at a loss for what to do, they would probably just alter us in the direction of their own values and psychology. The end result of the Superhappies' work on us would probably think like a Superhappy, except with some token human values.
Even if the Superhappies were able to strip away human pain without mishap, there could be negative unintended consequences. Removing negative emotions would actually disinhibit a lot of antisocial human behavior through the loss of shame and guilt. The Superhappies would then have to remove whatever aggressive or antisocial impulses we have, resulting in even more changes, which would again risk insanity or other problems requiring even more "fixes."
Any modification the Superhappies make will only lead to consequences that call for even more modifications, which have consequences of their own. When does this stop? I think the answer is that it doesn't stop until the product is much more Superhappy than it is human. (If instead the Superhappies were to let the humans be in charge of modifying themselves, a higher degree of continuity with past humanity might be preserved.)
So Eliezer, you and I know the potential pitfalls of modifying humans, but since the story doesn't show them, the Superhappy proposal looks overly attractive, and the humans who resist it look excessively closed-minded and trigger-happy, killing 15 billion of their own kind to resist something that just doesn't seem that bad (in the context of the story). To make the story show what you want it to show, you could add a second part to the normal ending that demonstrates exactly why the Superhappy proposal is so bad, drawing on your writings about the riskiness of brain modification.