Assuming your argument is correct, wouldn’t it make more sense to blow ourselves up with nukes rather than pollute the universe with UFAI? There may be other intelligent civilizations out there leading worthwhile lives that we threaten unfairly by unleashing UFAI.
There might be countless planets that are about to undergo an evolutionary arms race for the next few billion years, resulting in a lot of suffering. It is very unlikely that there is a single source of life that is at exactly the right stage of evolution, with exactly the right mind design, to not only lead worthwhile lives but also get its AI technology exactly right and not turn everything into a living hell.
If you assign negative utility to suffering, which is about as close to a universally accepted negative as anything gets, then as an expected utility maximizer you should seriously consider ending all life. Because 1) agents that are a product of evolution have complex values, 2) satisfying complex values requires meeting complex circumstances, 3) complex systems can fail in complex ways, and 4) any attempt at Friendly AI, which is incredibly complex, is likely to fail in unforeseeable ways.
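To make the expected-utility framing above concrete, here is a minimal sketch of the comparison being appealed to. Every probability and utility in it is a purely illustrative assumption, not a claim about the actual numbers; the argument only needs the rough shape of a small chance of success set against a very large negative payoff on failure.

```python
# Toy expected-utility comparison between "end all life now" and
# "continue and attempt Friendly AI". All values below are made up
# for illustration only.

p_fai_success = 0.01          # assumed probability that Friendly AI works out
u_success = 1_000_000         # utility of a worthwhile long-term future
u_failure = -100_000_000      # utility of a future containing vast suffering
u_extinction = 0              # baseline: no further suffering, no further value

eu_continue = p_fai_success * u_success + (1 - p_fai_success) * u_failure
eu_end_all_life = u_extinction

print(f"EU(continue)     = {eu_continue:,.0f}")
print(f"EU(end all life) = {eu_end_all_life:,.0f}")
# With these illustrative numbers EU(continue) comes out strongly negative,
# which is the shape of the worry voiced above: if failure is both likely
# and very bad, extinction can look "better" under naive EU maximization.
```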
For instance, say I were invincible. I could theoretically stab everyone I meet without any consequences, but I doubt I would do that. And I’m just human.
To name just one example of how things could go horribly wrong: humans are by their very nature interested in domination and sex. Our aversion to sexual exploitation depends largely on the memeplex of our cultural and societal circumstances. If you knew more, were smarter and could think faster, you might very well realize that such an aversion is an unnecessary remnant that you could easily extinguish to open up new pathways to gain utility. The idea that Gandhi would not agree to have his brain modified into a baby-eater is incredibly naive. Given the technology, people will alter their preferences and personalities. Many people actually perceive their moral reservations to be limiting. It only takes a certain amount of insight to overcome such limitations.
You simply can’t be sure that the future won’t hold vast amounts of negative utility. It is much easier for things to go horribly wrong than to be barely acceptable.
Goodness may seem mystical and amazingly complex from our current viewpoint, but is it really as complex as all that?
Maybe not, but betting on the possibility that goodness can be easily achieved is like pulling a random AI from mind design space hoping that it turns out to be friendly.
Similarly, it is easier to make piles of rubble than skyscrapers. Yet—amazingly—there are plenty of skyscrapers out there. Obviously something funny is going on...