Consider the following possibilities for how long it will take for humans to develop AI (friendly or otherwise) if we don’t kill ourselves via viruses, nuclear catastrophe, etc.
There are other possibilities. One is simply “never”; another is that AI turns out to be much less powerful than current predictions suggest; a third is that interstellar travel is impossible; a fourth is that AI singletons don’t reproduce and therefore don’t colonize.
Stable totalitarianism has been suggested.
But it does not exist.
Another would be a zero-privacy world, where anyone could spy on anyone else and press an alarm button if they saw someone doing something dangerous (and then everyone democratically votes to lynch them?).
There are lots of problems with this concept. First, reducing global risks this way requires a world government, and that would almost certainly stop progress.
And within thirty years, there’s even a chance of a small colony on Mars.
The chances of having a sustainable colony in the foreseeable future (~20 years) are close to zero.