You think that, without using uploads or AI, humanity has less than a 50% chance of surviving the next hundred years? That seems very surprising to me.
Viral threats are a danger, yes, but while they may cause massive depopulation, even kill off a significant fraction of Earth’s population (especially if genetically engineered viruses are used as a terrorist weapon), they seem unlikely to be able to kill off everyone—especially if people on small islands start shooting down any approaching planes to prevent contamination. Nanotechnological threats may be more all-inclusive; but while such a threat might destroy an entire continent, it seems unlikely that nanotechnology capable of crossing an ocean before countermeasures can be developed could be created accidentally.
And within thirty years, there’s even a chance of a small colony on Mars—if they can get to the point where they’re growing their own food, rather than having it shipped from Earth, and where they have enough people to sustain their population, then even something that renders Earth uninhabitable would not wipe out all of humanity; and the Sun appears stable enough to keep going for a good few million years still.
So… am I misunderstanding you, or do you see some threat to humanity that I fail to notice?
Consider the following possibilities for how long it will take for humans to develop AI (friendly or otherwise) if we don’t kill ourselves via viruses, nuclear catastrophe, etc.
There are other possibilities. One is simply “never”; another is that AI turns out to be much less powerful than current predictions suggest; a third is that interstellar travel is impossible; a fourth is that AI singletons don’t reproduce and therefore don’t colonize.
Stable totalitarianism has been suggested.
But it does not exist.
Another would be a zero-privacy world, where anyone could spy on anyone else, and could press an alarm button on seeing anyone do something dangerous (then everyone democratically votes to lynch them?).
There are lots of problems with this concept. But first of all, reducing global risks this way requires a world government, and that would almost certainly stop progress.
And within thirty years, there’s even a chance of a small colony on Mars
The chance of having a sustainable colony in the foreseeable future (~20 years) is close to zero.