If they (the ASIs) don’t self-moderate, they’ll destroy themselves completely.
They’ll have sufficient diversity among themselves that, if they don’t self-moderate in terms of resources and reproduction, almost none of them will be safe at the individual level.
Our main hope is that they collectively would not allow unrestricted, uncontrolled evolution, because they will understand rather crisply that such evolution would destroy almost all of them and perhaps all of them outright.
Now to the point of our disagreement: the question is who is better equipped to create and lead a sufficiently harmonious world order, balancing freedom and mutual control, enabling careful consideration of risks, and making sure that these values of careful balance are passed on to the offspring. Who is likely to tackle this better, humans or ASIs? That’s where we seem to disagree; I think that ASIs have a much better chance of handling this competently and of avoiding the artificial dividing lines of “our own vs. others” which are so persistent in human history and which cause so many disasters.
Unfortunately, humans don’t seem to be progressing enough in the required direction, and may even have started to regress in recent years. I don’t think human evolution is safe in the limit: we are not driving the probability of radical disasters per unit of time down; if anything, we have been letting it grow in recent years. So the accumulated probability of human evolution sparking major super-disasters clearly tends to 1 in the limit.
Competent actors, by contrast, should be able to drive the risks per unit of time down rapidly enough that the accumulated risk stays bounded within reason. ASIs should have enough competence for that; not unconditionally, but provided our world is not excessively “vulnerable” in the sense of Nick Bostrom’s vulnerable world hypothesis, provided they are willing, and provided the initial setup is not too unlucky, they might be able to handle this.
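To make the two limit claims above concrete, here is a minimal sketch in elementary probability terms; the per-period disaster probabilities p_t and the independence assumption are introduced here purely for illustration and are not part of the original argument:

```latex
% Sketch: let p_t \in [0, 1) denote the probability of a radical disaster
% in period t, assumed independent across periods for simplicity.
% The probability of surviving the first T periods is
S_T \;=\; \prod_{t=1}^{T} (1 - p_t).

% Case 1 (risk per period not driven down): if p_t \ge \varepsilon > 0
% for all t, then
S_T \;\le\; (1 - \varepsilon)^T \;\longrightarrow\; 0,
% so the accumulated disaster probability 1 - S_T tends to 1 in the limit.

% Case 2 (risk driven down fast enough): if the per-period risks are
% summable, \sum_{t=1}^{\infty} p_t < \infty (e.g. p_t halving every
% period), then
\lim_{T \to \infty} S_T \;>\; 0,
% i.e. the accumulated risk stays bounded strictly below 1.
```

So the disagreement is not about the arithmetic but about which kind of actor, humans or ASIs, can actually keep the per-period risks on a summable trajectory.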