Paperclip optimizer problem, yes. The problem here lies in the assumption that a sentient, self-programming entity could not adjust its valuative norms in just the same way that you and I do, or perhaps even more readily, given that it would be more generally capable than we are.
Human values change in part because we aren't optimizers in any substantial sense. We're giant mechas for moving around DNA (after the RNA's replication process got hijacked), blindly built by evolution for an environment whose primary dangers were large predators and other humans. But then something went wrong and the mechas got too smart from runaway sexual selection. This narrative may be slightly wrong, but something close to it is correct. More to the point, for much of human history, holding values that differed markedly from one's peers was a good way to not have reproductive success. Humans were selected for having incoherent, inconsistent, fluid value systems.
There’s no reason to think that an AGI will fall into that category. Moreover, note that even powerful humans prefer to impose their values on others rather than alter their own values. A sufficiently powerful AGI would likely do likewise.
Regarding the empire, I may need to apologize; I suspect I attach more negative connotations to the word "empire" than my remark made explicit, and that those connotations aren't shared. Here's a slightly different analogy that may help: if you had to choose between a future with the United Federation of Planets from Star Trek or the Imperium from Warhammer 40K, which would you choose?
If you had to choose between a future with the United Federation of Planets from Star Trek or the Imperium from Warhammer 40K, which would you choose?
Not Logos, but: the Imperium in a 40K-like universe and the UFP in a Star Trek-like universe. Switching them would be disastrous in either case. Not that either is optimal even for its own environment, and the actual universe is extremely unlikely to resemble either fiction. I agree that, in an unlikely future where humans still in control of their policies expand into space and encounter aliens, being able to afford to be nice to them is better than not being able to, and actually being nice to them is better than not, if one can afford it.
There’s no reason to think that an AGI will fall into that category. Moreover, note that even powerful humans prefer to impose their values on others rather than alter their own values. A sufficiently powerful AGI would likely do likewise.
I was assuming the latter. As to the former, again: hence my caveat. I don't much care what the full range of possible AGI mindspace is; I've already deliberately limited the kinds I'm talking about to a very narrow window.
So objecting to my valuative statement regarding that narrow window with "But there's no reason to think it would be in that window!" just shows that you aren't reading carefully, to be quite frank.
I don't much care what the range of possible values of f(x) is for x = 0..10000000 when I've already asked what f(10) is. If it's a sentient, recursively self-improving entity, then at some point it alone would become more "cognizant" than the entire human race put together.
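To make the analogy concrete, here is a minimal sketch in Python. The function `f` below is a hypothetical stand-in (not anything from the discussion); the point is only that the range of f over a huge domain tells you nothing specific about f at the one input actually being asked about.

```python
# Toy stand-in for "AGI mindspace"; purely illustrative.
def f(x):
    return x % 7

# The range over the whole domain x = 0..10000000 is the full set {0..6}...
values_over_domain = {f(x) for x in range(10_000_001)}

# ...but the question under discussion concerns only one input.
single_case = f(10)  # = 3
```

The wide `values_over_domain` is irrelevant once the question has been restricted to `f(10)`; that is the shape of the argument above.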
If you were put in a situation where you had to choose between letting the world be populated by cows, or by people, which would you choose?