I would expect that the absence of a global catastrophe for ~2 years after the creation of AGI would increase the chances of most people’s survival. Especially in a scenario where alignment was easy.

After all, then there will be time for political and popular action. We can expect something strange when politicians and their voters finally understand the existential horror of the situation!

I don’t know. Attempts to ban all AI? The Butlerian jihad? Nationalization of AI companies? Revolutions and military coups? Everything seems possible.

If AI respects the right to property, why shouldn’t it respect the right to UBI if such a law is passed? The rapid growth of the economy will make it possible to feed many.

In fact, a world in which someone shrugs their shoulders and allows 99% of the population to die seems obviously unsafe for the remaining 1%.
I think the crux is that I don’t believe political will or popular action will matter until AI can clearly automate ~all jobs, for both reasonable and unreasonable reasons. And by default that is far too late to do much of anything, in the sense that the point of no return was way earlier.

For political action to be useful, it needs to happen when there are real signs that AI could, for example, automate AI research, not after the event has already happened.